Dec 03 13:50:50.497113 master-0 systemd[1]: Starting Kubernetes Kubelet...
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 13:50:50.721436 master-0 kubenswrapper[4808]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:50:50.723639 master-0 kubenswrapper[4808]: I1203 13:50:50.721270 4808 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726816 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726862 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726868 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726881 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726886 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726895 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726900 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726905 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726911 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726916 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726920 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726924 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726946 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726954 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:50:50.726915 master-0 kubenswrapper[4808]: W1203 13:50:50.726960 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726965 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726971 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726976 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726980 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726984 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726991 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.726997 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727001 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727006 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727011 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727016 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727020 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727025 4808 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727029 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727033 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727038 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727043 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727050 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:50:50.727692 master-0 kubenswrapper[4808]: W1203 13:50:50.727055 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727060 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727065 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727069 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727074 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727078 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727084 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727090 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727096 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727103 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727107 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727112 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727117 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727124 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727129 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727134 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727138 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727142 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727147 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:50:50.728437 master-0 kubenswrapper[4808]: W1203 13:50:50.727154 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727160 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727165 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727170 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727175 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727179 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727184 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727189 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727193 4808 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727198 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727203 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727208 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727212 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727217 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727221 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727226 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727230 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727235 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727239 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: W1203 13:50:50.727244 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: I1203 13:50:50.727463 4808 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 13:50:50.729026 master-0 kubenswrapper[4808]: I1203 13:50:50.727484 4808 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727496 4808 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727515 4808 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727532 4808 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727544 4808 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727554 4808 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727563 4808 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727569 4808 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727582 4808 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727589 4808 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727601 4808 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727609 4808 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727616 4808 flags.go:64] FLAG: --cgroup-root=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727621 4808 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727627 4808 flags.go:64] FLAG: --client-ca-file=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727632 4808 flags.go:64] FLAG: --cloud-config=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727638 4808 flags.go:64] FLAG: --cloud-provider=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727643 4808 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727650 4808 flags.go:64] FLAG: --cluster-domain=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727655 4808 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727661 4808 flags.go:64] FLAG: --config-dir=""
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727666 4808 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727672 4808 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727680 4808 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 13:50:50.729722 master-0 kubenswrapper[4808]: I1203 13:50:50.727686 4808 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727691 4808 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727698 4808 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727704 4808 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727711 4808 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727718 4808 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727724 4808 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727730 4808 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727738 4808 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727743 4808 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727758 4808 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727795 4808 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727801 4808 flags.go:64] FLAG: --enable-server="true"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727808 4808 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727817 4808 flags.go:64] FLAG: --event-burst="100"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727824 4808 flags.go:64] FLAG: --event-qps="50"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727829 4808 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727836 4808 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727841 4808 flags.go:64] FLAG: --eviction-hard=""
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727848 4808 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727854 4808 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727859 4808 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727865 4808 flags.go:64] FLAG: --eviction-soft=""
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727871 4808 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727876 4808 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 13:50:50.730494 master-0 kubenswrapper[4808]: I1203 13:50:50.727881 4808 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727886 4808 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727891 4808 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727896 4808 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727901 4808 flags.go:64] FLAG: --feature-gates=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727908 4808 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727913 4808 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727919 4808 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727925 4808 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727931 4808 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727969 4808 flags.go:64] FLAG: --help="false"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727976 4808 flags.go:64] FLAG: --hostname-override=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727981 4808 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727987 4808 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727992 4808 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.727998 4808 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728003 4808 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728023 4808 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728049 4808 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728055 4808 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728060 4808 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728066 4808 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728071 4808 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728076 4808 flags.go:64] FLAG: --kube-reserved=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728082 4808 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728087 4808 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 13:50:50.731246 master-0 kubenswrapper[4808]: I1203 13:50:50.728093 4808 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728098 4808 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728103 4808 flags.go:64] FLAG: --lock-file=""
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728108 4808 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728114 4808 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728119 4808 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728129 4808 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728134 4808 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728140 4808 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728146 4808 flags.go:64] FLAG: --logging-format="text"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728151 4808 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728157 4808 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728163 4808 flags.go:64] FLAG: --manifest-url=""
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728169 4808 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728177 4808 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728183 4808 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728190 4808 flags.go:64] FLAG: --max-pods="110"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728195 4808 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728201 4808 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728207 4808 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728212 4808 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728218 4808 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728224 4808 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728230 4808 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 13:50:50.732085 master-0 kubenswrapper[4808]: I1203 13:50:50.728251 4808 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728281 4808 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728288 4808 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728294 4808 flags.go:64] FLAG: --pod-cidr=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728300 4808 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728327 4808 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728334 4808 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728340 4808 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728345 4808 flags.go:64] FLAG: --port="10250"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728351 4808 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728358 4808 flags.go:64] FLAG: --provider-id=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728363 4808 flags.go:64] FLAG: --qos-reserved=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728369 4808 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728374 4808 flags.go:64] FLAG: --register-node="true"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728380 4808 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728385 4808 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728397 4808 flags.go:64] FLAG: --registry-burst="10"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728403 4808 flags.go:64] FLAG: --registry-qps="5"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728408 4808 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728414 4808 flags.go:64] FLAG: --reserved-memory=""
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728422 4808 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728427 4808 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728434 4808 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728441 4808 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728447 4808 flags.go:64] FLAG: --runonce="false"
Dec 03 13:50:50.732860 master-0 kubenswrapper[4808]: I1203 13:50:50.728453 4808 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728459 4808 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728465 4808 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728471 4808 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728492 4808 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728498 4808 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728504 4808 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728510 4808 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728523 4808 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728529 4808 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728535 4808 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728541 4808 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728547 4808 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728553 4808 flags.go:64] FLAG: --system-cgroups=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728562 4808 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728572 4808 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728577 4808 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728583 4808 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728591 4808 flags.go:64] FLAG: --tls-min-version=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728597 4808 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728606 4808 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728612 4808 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728617 4808 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728622 4808 flags.go:64] FLAG: --v="2"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728641 4808 flags.go:64] FLAG: --version="false"
Dec 03 13:50:50.733969 master-0 kubenswrapper[4808]: I1203 13:50:50.728650 4808 flags.go:64] FLAG: --vmodule=""
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: I1203 13:50:50.728657 4808 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: I1203 13:50:50.728664 4808 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728793 4808 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728801 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728806 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728810 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728815 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728820 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728825 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728830 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728844 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728854 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728864 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728873 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728878 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728882 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728888 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728894 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728899 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:50:50.734710 master-0 kubenswrapper[4808]: W1203 13:50:50.728903 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728909 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728915 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728920 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728926 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728931 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728936 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728944 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728949 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728955 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728961 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728967 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728971 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728976 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728981 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728986 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728991 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.728995 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:50:50.735391 master-0 kubenswrapper[4808]: W1203 13:50:50.729000 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729005 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729009 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729013 4808 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729018 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729023 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729030 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729034 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729040 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729045 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729050 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729056 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729061 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729065 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729069 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729073 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729078 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729082 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729087 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729091 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:50:50.735894 master-0 kubenswrapper[4808]: W1203 13:50:50.729096 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729103 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729108 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729113 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729118 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729123 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729129 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729134 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729138 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729142 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729147 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729151 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729156 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729161 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729166 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729170 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:50:50.736762 master-0 kubenswrapper[4808]: W1203 13:50:50.729175 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:50:50.737206 master-0 kubenswrapper[4808]: I1203 13:50:50.729461 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:50:50.742289 master-0 kubenswrapper[4808]: I1203 13:50:50.742151 4808 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 13:50:50.742289 master-0 kubenswrapper[4808]: I1203 13:50:50.742237 4808 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742350 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742362 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742370 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742376 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742390 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742394 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742405 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742409 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742415 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742420 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742424 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742429 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742434 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742439 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742444 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742448 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742452 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742457 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:50:50.742591 master-0 kubenswrapper[4808]: W1203 13:50:50.742462 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742468 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742478 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742484 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742488 4808 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742493 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742498 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742502 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742507 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742511 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742515 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742519 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742524 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742529 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742533 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742551 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742558 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742563 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742568 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742578 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:50:50.743160 master-0 kubenswrapper[4808]: W1203 13:50:50.742582 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742586 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742591 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742595 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742600 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742604 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742609 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742615 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742621 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742625 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742630 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742635 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742639 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742644 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742648 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742653 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742657 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742662 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742666 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742671 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:50:50.744075 master-0 kubenswrapper[4808]: W1203 13:50:50.742676 4808 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742680 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742685 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742689 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742694 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742699 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742703 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742709 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742714 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742719 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742723 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742727 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742733 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742738 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: I1203 13:50:50.742746 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:50:50.744893 master-0 kubenswrapper[4808]: W1203 13:50:50.742886 4808 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742898 4808 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742903 4808 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742909 4808 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742915 4808 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742924 4808 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742929 4808 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742934 4808 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742941 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742946 4808 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742951 4808 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742956 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742961 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742966 4808 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742971 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742976 4808 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742980 4808 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742985 4808 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742990 4808 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742995 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:50:50.745390 master-0 kubenswrapper[4808]: W1203 13:50:50.742999 4808 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743003 4808 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743008 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743012 4808 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743017 4808 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743023 4808 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743028 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743033 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743037 4808 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743042 4808 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743048 4808 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743053 4808 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743058 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743064 4808 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743070 4808 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743074 4808 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743079 4808 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743084 4808 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743088 4808 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:50:50.746014 master-0 kubenswrapper[4808]: W1203 13:50:50.743094 4808 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743098 4808 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743103 4808 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743108 4808 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743113 4808 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743118 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743123 4808 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743128 4808 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743133 4808 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743137 4808 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743142 4808 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743147 4808 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743152 4808 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743156 4808 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743161 4808 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743166 4808 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743172 4808 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743180 4808 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743186 4808 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743190 4808 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:50:50.746596 master-0 kubenswrapper[4808]: W1203 13:50:50.743194 4808 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743200 4808 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743205 4808 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743209 4808 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743214 4808 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743219 4808 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743224 4808 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743228 4808 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743233 4808 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743237 4808 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743243 4808 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743248 4808 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: W1203 13:50:50.743252 4808 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: I1203 13:50:50.743278 4808 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:50:50.747187 master-0 kubenswrapper[4808]: I1203 13:50:50.743912 4808 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 03 13:50:50.747684 master-0 kubenswrapper[4808]: I1203 13:50:50.746809 4808 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Dec 03 13:50:50.747852 master-0 kubenswrapper[4808]: I1203 13:50:50.747818 4808 server.go:997] "Starting client certificate rotation"
Dec 03 13:50:50.747893 master-0 kubenswrapper[4808]: I1203 13:50:50.747871 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 13:50:50.748281 master-0 kubenswrapper[4808]: I1203 13:50:50.748139 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 03 13:50:50.758999 master-0 kubenswrapper[4808]: I1203 13:50:50.758918 4808 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 13:50:50.765221 master-0 kubenswrapper[4808]: I1203 13:50:50.764648 4808 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 13:50:50.766837 master-0 kubenswrapper[4808]: E1203 13:50:50.766748 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:50:50.776409 master-0 kubenswrapper[4808]: I1203 13:50:50.776336 4808 log.go:25] "Validated CRI v1 runtime API"
Dec 03 13:50:50.780941 master-0 kubenswrapper[4808]: I1203 13:50:50.780780 4808 log.go:25] "Validated CRI v1 image API"
Dec 03 13:50:50.783781 master-0 kubenswrapper[4808]: I1203 13:50:50.783530 4808 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 03 13:50:50.787066 master-0 kubenswrapper[4808]: I1203 13:50:50.786961 4808 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3]
Dec 03 13:50:50.787066 master-0 kubenswrapper[4808]: I1203 13:50:50.787008 4808 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Dec 03 13:50:50.806036 master-0 kubenswrapper[4808]: I1203 13:50:50.805576 4808 manager.go:217] Machine: {Timestamp:2025-12-03 13:50:50.803972304 +0000 UTC m=+0.244270249 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:5051321c-b7a7-4bc8-b64a-b5b2f6df7e9d Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:9e:f4:18:ab:cf:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 13:50:50.806036 master-0 kubenswrapper[4808]: I1203 13:50:50.805894 4808 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 13:50:50.806036 master-0 kubenswrapper[4808]: I1203 13:50:50.806061 4808 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 13:50:50.807474 master-0 kubenswrapper[4808]: I1203 13:50:50.807429 4808 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 13:50:50.807758 master-0 kubenswrapper[4808]: I1203 13:50:50.807669 4808 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 13:50:50.808048 master-0 kubenswrapper[4808]: I1203 13:50:50.807714 4808 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 13:50:50.808048 master-0 kubenswrapper[4808]: I1203 13:50:50.808005 4808 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 13:50:50.808048 master-0 kubenswrapper[4808]: I1203 13:50:50.808017 4808 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 13:50:50.808316 master-0 kubenswrapper[4808]: I1203 13:50:50.808299 4808 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 13:50:50.808361 master-0 kubenswrapper[4808]: I1203 13:50:50.808346 4808 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 13:50:50.808567 master-0 kubenswrapper[4808]: I1203 13:50:50.808544 4808 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 13:50:50.808984 master-0 kubenswrapper[4808]: I1203 13:50:50.808955 4808 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 13:50:50.810136 master-0 kubenswrapper[4808]: I1203 13:50:50.810104 4808 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 13:50:50.810136 master-0 kubenswrapper[4808]: I1203 13:50:50.810136 4808 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 13:50:50.810224 master-0 kubenswrapper[4808]: I1203 13:50:50.810178 4808 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 13:50:50.810224 master-0 kubenswrapper[4808]: I1203 13:50:50.810195 4808 kubelet.go:324] "Adding apiserver pod source"
Dec 03 13:50:50.810542 master-0 kubenswrapper[4808]: I1203 13:50:50.810514 4808 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 13:50:50.813340 master-0 kubenswrapper[4808]: I1203 13:50:50.813281 4808 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1"
Dec 03 13:50:50.819008 master-0 kubenswrapper[4808]: W1203 13:50:50.818841 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:50:50.820556 master-0 kubenswrapper[4808]: E1203 13:50:50.820480 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:50:50.820866 master-0 kubenswrapper[4808]: W1203 13:50:50.819010 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:50:50.821225 master-0 kubenswrapper[4808]: E1203 13:50:50.820925 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:50:50.822803 master-0 kubenswrapper[4808]: I1203 13:50:50.822765 4808 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 13:50:50.823385 master-0 kubenswrapper[4808]: I1203 13:50:50.823351 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823390 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823412 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823430 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823439 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823451 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823465 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 13:50:50.823472 master-0 kubenswrapper[4808]: I1203 13:50:50.823476 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 13:50:50.824105 master-0 kubenswrapper[4808]: I1203 13:50:50.823490 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 13:50:50.824105 master-0 kubenswrapper[4808]: I1203 13:50:50.823499 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 13:50:50.824105 master-0 kubenswrapper[4808]: I1203 13:50:50.823512 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 13:50:50.824105 master-0 kubenswrapper[4808]: I1203 13:50:50.823525 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 13:50:50.824274 master-0 kubenswrapper[4808]: I1203 13:50:50.824131 4808 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 13:50:50.824813 master-0 kubenswrapper[4808]: I1203 13:50:50.824771 4808 server.go:1280] "Started kubelet"
Dec 03 13:50:50.825513 master-0 kubenswrapper[4808]: I1203 13:50:50.825394 4808 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 13:50:50.825686 master-0 kubenswrapper[4808]: I1203 13:50:50.825659 4808 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 03 13:50:50.825883 master-0 kubenswrapper[4808]: I1203 13:50:50.825648 4808 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 13:50:50.826462 master-0 kubenswrapper[4808]: I1203 13:50:50.826440 4808 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 13:50:50.826613 master-0 kubenswrapper[4808]: I1203 13:50:50.826549 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:50:50.826875 master-0 systemd[1]: Started Kubernetes Kubelet.
Dec 03 13:50:50.827926 master-0 kubenswrapper[4808]: I1203 13:50:50.827863 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 13:50:50.828019 master-0 kubenswrapper[4808]: I1203 13:50:50.827976 4808 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 13:50:50.828731 master-0 kubenswrapper[4808]: E1203 13:50:50.828692 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:50:50.828814 master-0 kubenswrapper[4808]: I1203 13:50:50.828762 4808 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 13:50:50.828814 master-0 kubenswrapper[4808]: I1203 13:50:50.828781 4808 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 13:50:50.828893 master-0 kubenswrapper[4808]: I1203 13:50:50.828818 4808 server.go:449] "Adding debug handlers to kubelet server"
Dec 03 13:50:50.829171 master-0 kubenswrapper[4808]: I1203 13:50:50.828976 4808 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Dec 03 13:50:50.829171 master-0 kubenswrapper[4808]: I1203 13:50:50.829088 4808 reconstruct.go:97] "Volume reconstruction finished"
Dec 03 13:50:50.829171 master-0 kubenswrapper[4808]: I1203 13:50:50.829111 4808 reconciler.go:26] "Reconciler: start to sync state"
Dec 03 13:50:50.830200 master-0 kubenswrapper[4808]: E1203 13:50:50.829223 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db8d444ba14bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,LastTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:50:50.830613 master-0 kubenswrapper[4808]: W1203 13:50:50.830492 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:50:50.830676 master-0 kubenswrapper[4808]: E1203 13:50:50.830640 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:50:50.831388 master-0 kubenswrapper[4808]: E1203 13:50:50.831083 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Dec 03 13:50:50.835371 master-0 kubenswrapper[4808]: I1203 13:50:50.835205 4808 factory.go:153] Registering CRI-O factory
Dec 03 13:50:50.835702 master-0 kubenswrapper[4808]: I1203 13:50:50.835664 4808 factory.go:221] Registration of the crio container factory successfully
Dec 03 13:50:50.835872 master-0 kubenswrapper[4808]: I1203 13:50:50.835838 4808 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 13:50:50.835872 master-0 kubenswrapper[4808]: I1203 13:50:50.835871 4808 factory.go:55] Registering systemd factory
Dec 03 13:50:50.835988 master-0 kubenswrapper[4808]: I1203 13:50:50.835882 4808 factory.go:221] Registration of the systemd container factory successfully
Dec 03 13:50:50.835988 master-0 kubenswrapper[4808]: I1203 13:50:50.835968 4808 factory.go:103] Registering Raw factory
Dec 03 13:50:50.836062 master-0 kubenswrapper[4808]: I1203 13:50:50.835993 4808 manager.go:1196] Started watching for new ooms in manager
Dec 03 13:50:50.837559 master-0 kubenswrapper[4808]: I1203 13:50:50.837536 4808 manager.go:319] Starting recovery of all containers
Dec 03 13:50:50.837967 master-0 kubenswrapper[4808]: E1203 13:50:50.837848 4808 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Dec 03 13:50:50.857914 master-0 kubenswrapper[4808]: I1203 13:50:50.857811 4808 manager.go:324] Recovery completed
Dec 03 13:50:50.868645 master-0 kubenswrapper[4808]: I1203 13:50:50.868572 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:50.870419 master-0 kubenswrapper[4808]: I1203 13:50:50.870329 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:50.870509 master-0 kubenswrapper[4808]: I1203 13:50:50.870439 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:50.870509 master-0 kubenswrapper[4808]: I1203 13:50:50.870453 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:50.871831 master-0 kubenswrapper[4808]: I1203 13:50:50.871746 4808 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 03 13:50:50.871831 master-0 kubenswrapper[4808]: I1203 13:50:50.871762 4808 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 03 13:50:50.871831 master-0 kubenswrapper[4808]: I1203 13:50:50.871787 4808 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 13:50:50.875225 master-0 kubenswrapper[4808]: I1203 13:50:50.875161 4808 policy_none.go:49] "None policy: Start"
Dec 03 13:50:50.876556 master-0 kubenswrapper[4808]: I1203 13:50:50.876471 4808 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 03 13:50:50.876556 master-0 kubenswrapper[4808]: I1203 13:50:50.876551 4808 state_mem.go:35] "Initializing new in-memory state store"
Dec 03 13:50:50.929357 master-0 kubenswrapper[4808]: E1203 13:50:50.929028 4808 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:50:50.946936 master-0 kubenswrapper[4808]: I1203 13:50:50.946652 4808 manager.go:334] "Starting Device Plugin manager"
Dec 03 13:50:50.947132 master-0 kubenswrapper[4808]: I1203 13:50:50.946978 4808 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 03 13:50:50.947132 master-0 kubenswrapper[4808]: I1203 13:50:50.947000 4808 server.go:79] "Starting device plugin registration server"
Dec 03 13:50:50.947467 master-0 kubenswrapper[4808]: I1203 13:50:50.947444 4808 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 03 13:50:50.947618 master-0 kubenswrapper[4808]: I1203 13:50:50.947471 4808 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 03 13:50:50.948134 master-0 kubenswrapper[4808]: I1203 13:50:50.948092 4808 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 03 13:50:50.948373 master-0 kubenswrapper[4808]: I1203 13:50:50.948341 4808 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 03 13:50:50.948373 master-0 kubenswrapper[4808]: I1203 13:50:50.948368 4808 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 03 13:50:50.949693 master-0 kubenswrapper[4808]: E1203 13:50:50.949637 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Dec 03 13:50:50.998403 master-0 kubenswrapper[4808]: I1203 13:50:50.998192 4808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 03 13:50:51.004471 master-0 kubenswrapper[4808]: I1203 13:50:51.004385 4808 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 03 13:50:51.004737 master-0 kubenswrapper[4808]: I1203 13:50:51.004497 4808 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 03 13:50:51.004737 master-0 kubenswrapper[4808]: I1203 13:50:51.004567 4808 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 03 13:50:51.004737 master-0 kubenswrapper[4808]: E1203 13:50:51.004685 4808 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 03 13:50:51.006288 master-0 kubenswrapper[4808]: W1203 13:50:51.006131 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:50:51.006366 master-0 kubenswrapper[4808]: E1203 13:50:51.006294 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:50:51.034224 master-0 kubenswrapper[4808]: E1203 13:50:51.034132 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Dec 03 13:50:51.048504 master-0 kubenswrapper[4808]: I1203 13:50:51.048303 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.050159 master-0 kubenswrapper[4808]: I1203 13:50:51.049933 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.050159 master-0 kubenswrapper[4808]: I1203 13:50:51.050008 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.050159 master-0 kubenswrapper[4808]: I1203 13:50:51.050029 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.050159 master-0 kubenswrapper[4808]: I1203 13:50:51.050074 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:50:51.051201 master-0 kubenswrapper[4808]: E1203 13:50:51.051146 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 13:50:51.105617 master-0 kubenswrapper[4808]: I1203 13:50:51.105406 4808 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Dec 03 13:50:51.105617 master-0 kubenswrapper[4808]: I1203 13:50:51.105592 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.107097 master-0 kubenswrapper[4808]: I1203 13:50:51.107022 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.107097 master-0 kubenswrapper[4808]: I1203 13:50:51.107076 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.107097 master-0 kubenswrapper[4808]: I1203 13:50:51.107085 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.108049 master-0 kubenswrapper[4808]: I1203 13:50:51.107320 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.108049 master-0 kubenswrapper[4808]: I1203 13:50:51.107718 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:50:51.108049 master-0 kubenswrapper[4808]: I1203 13:50:51.107833 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.108623 master-0 kubenswrapper[4808]: I1203 13:50:51.108577 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.108673 master-0 kubenswrapper[4808]: I1203 13:50:51.108638 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.108673 master-0 kubenswrapper[4808]: I1203 13:50:51.108658 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.108912 master-0 kubenswrapper[4808]: I1203 13:50:51.108883 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.109488 master-0 kubenswrapper[4808]: I1203 13:50:51.109013 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:50:51.109488 master-0 kubenswrapper[4808]: I1203 13:50:51.109048 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.109488 master-0 kubenswrapper[4808]: I1203 13:50:51.109123 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.109488 master-0 kubenswrapper[4808]: I1203 13:50:51.109144 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.109488 master-0 kubenswrapper[4808]: I1203 13:50:51.109152 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.109912 master-0 kubenswrapper[4808]: I1203 13:50:51.109717 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.109912 master-0 kubenswrapper[4808]: I1203 13:50:51.109745 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.109912 master-0 kubenswrapper[4808]: I1203 13:50:51.109755 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.110074 master-0 kubenswrapper[4808]: I1203 13:50:51.110038 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.110108 master-0 kubenswrapper[4808]: I1203 13:50:51.110086 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.110108 master-0 kubenswrapper[4808]: I1203 13:50:51.110100 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.110304 master-0 kubenswrapper[4808]: I1203 13:50:51.110285 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.110687 master-0 kubenswrapper[4808]: I1203 13:50:51.110628 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:50:51.110788 master-0 kubenswrapper[4808]: I1203 13:50:51.110758 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.111087 master-0 kubenswrapper[4808]: I1203 13:50:51.111063 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:50:51.111553 master-0 kubenswrapper[4808]: I1203 13:50:51.111520 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:50:51.111607 master-0 kubenswrapper[4808]: I1203 13:50:51.111562 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:50:51.111967 master-0 kubenswrapper[4808]: I1203 13:50:51.111933 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:50:51.112155 master-0 kubenswrapper[4808]: I1203 13:50:51.112125 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.112209 master-0 kubenswrapper[4808]: I1203 13:50:51.112180 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:51.115381 master-0 kubenswrapper[4808]: I1203 13:50:51.115207 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.115381 master-0 kubenswrapper[4808]: I1203 13:50:51.115241 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.115381 master-0 kubenswrapper[4808]: I1203 13:50:51.115287 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.116607 master-0 kubenswrapper[4808]: I1203 13:50:51.115886 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.116706 master-0 kubenswrapper[4808]: I1203 13:50:51.116656 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.116706 master-0 kubenswrapper[4808]: I1203 13:50:51.116679 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.116812 master-0 kubenswrapper[4808]: I1203 13:50:51.116728 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.116812 master-0 kubenswrapper[4808]: I1203 13:50:51.116760 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.116812 master-0 kubenswrapper[4808]: I1203 13:50:51.116805 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.117852 master-0 
kubenswrapper[4808]: I1203 13:50:51.117810 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.117930 master-0 kubenswrapper[4808]: I1203 13:50:51.117874 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:51.119022 master-0 kubenswrapper[4808]: I1203 13:50:51.118973 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.119087 master-0 kubenswrapper[4808]: I1203 13:50:51.119036 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.119087 master-0 kubenswrapper[4808]: I1203 13:50:51.119048 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.230992 master-0 kubenswrapper[4808]: I1203 13:50:51.230917 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.230992 master-0 kubenswrapper[4808]: I1203 13:50:51.230975 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.231196 master-0 kubenswrapper[4808]: I1203 13:50:51.231025 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.231196 master-0 kubenswrapper[4808]: I1203 13:50:51.231066 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.231196 master-0 kubenswrapper[4808]: I1203 13:50:51.231170 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.231391 master-0 kubenswrapper[4808]: I1203 13:50:51.231223 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.231391 master-0 kubenswrapper[4808]: I1203 13:50:51.231251 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.231391 
master-0 kubenswrapper[4808]: I1203 13:50:51.231305 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.231391 master-0 kubenswrapper[4808]: I1203 13:50:51.231332 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.231391 master-0 kubenswrapper[4808]: I1203 13:50:51.231358 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.231391 master-0 kubenswrapper[4808]: I1203 13:50:51.231379 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231408 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231492 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231508 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231526 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.231610 master-0 kubenswrapper[4808]: I1203 13:50:51.231559 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.252324 master-0 kubenswrapper[4808]: I1203 13:50:51.252094 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:51.254738 master-0 kubenswrapper[4808]: I1203 13:50:51.254694 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.254812 master-0 kubenswrapper[4808]: I1203 13:50:51.254760 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.254812 master-0 kubenswrapper[4808]: I1203 13:50:51.254780 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.254883 master-0 kubenswrapper[4808]: I1203 13:50:51.254843 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:50:51.255646 master-0 kubenswrapper[4808]: E1203 13:50:51.255601 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:50:51.332404 master-0 kubenswrapper[4808]: I1203 13:50:51.332243 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.332404 master-0 kubenswrapper[4808]: I1203 13:50:51.332422 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.332738 master-0 kubenswrapper[4808]: I1203 13:50:51.332552 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.332738 master-0 kubenswrapper[4808]: I1203 13:50:51.332552 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.332738 master-0 kubenswrapper[4808]: I1203 13:50:51.332663 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.332738 master-0 kubenswrapper[4808]: I1203 13:50:51.332722 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.332854 master-0 kubenswrapper[4808]: I1203 13:50:51.332747 4808 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.332854 master-0 kubenswrapper[4808]: I1203 13:50:51.332816 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.332926 master-0 kubenswrapper[4808]: I1203 13:50:51.332855 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.332926 master-0 kubenswrapper[4808]: I1203 13:50:51.332762 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.332926 master-0 kubenswrapper[4808]: I1203 13:50:51.332768 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333021 master-0 kubenswrapper[4808]: I1203 13:50:51.332956 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333021 master-0 kubenswrapper[4808]: I1203 13:50:51.332969 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333039 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.332987 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.332982 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333121 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333140 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333159 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333178 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333224 master-0 kubenswrapper[4808]: I1203 13:50:51.333206 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333275 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333282 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333218 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333335 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333358 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 
13:50:51.333399 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333417 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333440 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333451 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333465 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.333575 master-0 kubenswrapper[4808]: I1203 13:50:51.333515 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.333906 master-0 kubenswrapper[4808]: I1203 13:50:51.333526 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.436151 master-0 kubenswrapper[4808]: E1203 13:50:51.436022 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Dec 03 13:50:51.450615 master-0 kubenswrapper[4808]: I1203 13:50:51.450504 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:50:51.475253 master-0 kubenswrapper[4808]: I1203 13:50:51.475143 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:50:51.494857 master-0 kubenswrapper[4808]: I1203 13:50:51.494757 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:50:51.523445 master-0 kubenswrapper[4808]: I1203 13:50:51.523254 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:50:51.529552 master-0 kubenswrapper[4808]: I1203 13:50:51.529515 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:50:51.656075 master-0 kubenswrapper[4808]: I1203 13:50:51.655862 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:51.657703 master-0 kubenswrapper[4808]: I1203 13:50:51.657655 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:51.657771 master-0 kubenswrapper[4808]: I1203 13:50:51.657714 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:51.657806 master-0 kubenswrapper[4808]: I1203 13:50:51.657786 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:51.657909 master-0 kubenswrapper[4808]: I1203 13:50:51.657876 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:50:51.659330 master-0 kubenswrapper[4808]: E1203 13:50:51.659269 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:50:51.828359 master-0 kubenswrapper[4808]: I1203 13:50:51.828289 4808 csi_plugin.go:884] Failed to 
contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.073403 master-0 kubenswrapper[4808]: W1203 13:50:52.073248 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.073403 master-0 kubenswrapper[4808]: E1203 13:50:52.073390 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:52.241634 master-0 kubenswrapper[4808]: W1203 13:50:52.241321 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.241634 master-0 kubenswrapper[4808]: E1203 13:50:52.241573 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:52.242492 master-0 kubenswrapper[4808]: E1203 13:50:52.242426 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Dec 03 13:50:52.265656 master-0 kubenswrapper[4808]: W1203 13:50:52.265574 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.265735 master-0 kubenswrapper[4808]: E1203 13:50:52.265665 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:52.309189 master-0 kubenswrapper[4808]: W1203 13:50:52.309047 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bce50c457ac1f4721bc81a570dd238a.slice/crio-69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677 WatchSource:0}: Error finding container 69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677: Status 404 returned error can't find the container with id 69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677 Dec 03 13:50:52.330944 master-0 kubenswrapper[4808]: I1203 13:50:52.330876 4808 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 13:50:52.347187 master-0 kubenswrapper[4808]: W1203 13:50:52.347109 4808 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd78739a7694769882b7e47ea5ac08a10.slice/crio-0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d WatchSource:0}: Error finding container 0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d: Status 404 returned error can't find the container with id 0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d Dec 03 13:50:52.369168 master-0 kubenswrapper[4808]: E1203 13:50:52.368914 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db8d444ba14bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,LastTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:50:52.377777 master-0 kubenswrapper[4808]: W1203 13:50:52.377689 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13238af3704fe583f617f61e755cf4c2.slice/crio-27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95 WatchSource:0}: Error finding container 27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95: Status 404 returned error can't find the container with id 27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95 Dec 03 13:50:52.410168 master-0 kubenswrapper[4808]: W1203 13:50:52.410083 4808 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41b95a38663dd6fe34e183818a475977.slice/crio-1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09 WatchSource:0}: Error finding container 1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09: Status 404 returned error can't find the container with id 1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09 Dec 03 13:50:52.459868 master-0 kubenswrapper[4808]: I1203 13:50:52.459747 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:52.461920 master-0 kubenswrapper[4808]: I1203 13:50:52.461841 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:52.461920 master-0 kubenswrapper[4808]: I1203 13:50:52.461923 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:52.462080 master-0 kubenswrapper[4808]: I1203 13:50:52.461937 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:52.462138 master-0 kubenswrapper[4808]: I1203 13:50:52.462118 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:50:52.463551 master-0 kubenswrapper[4808]: E1203 13:50:52.463464 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:50:52.499089 master-0 kubenswrapper[4808]: W1203 13:50:52.498985 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb495b0c38f2c54e7cc46282c5f92aab5.slice/crio-4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d WatchSource:0}: Error finding container 
4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d: Status 404 returned error can't find the container with id 4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d Dec 03 13:50:52.602931 master-0 kubenswrapper[4808]: W1203 13:50:52.602776 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.602931 master-0 kubenswrapper[4808]: E1203 13:50:52.602900 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:52.827864 master-0 kubenswrapper[4808]: I1203 13:50:52.827790 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:52.832050 master-0 kubenswrapper[4808]: I1203 13:50:52.832005 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 13:50:52.833429 master-0 kubenswrapper[4808]: E1203 13:50:52.833366 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:53.013070 master-0 kubenswrapper[4808]: I1203 
13:50:53.012839 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d"} Dec 03 13:50:53.014069 master-0 kubenswrapper[4808]: I1203 13:50:53.013984 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09"} Dec 03 13:50:53.015427 master-0 kubenswrapper[4808]: I1203 13:50:53.015355 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95"} Dec 03 13:50:53.016343 master-0 kubenswrapper[4808]: I1203 13:50:53.016303 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d"} Dec 03 13:50:53.017413 master-0 kubenswrapper[4808]: I1203 13:50:53.017364 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677"} Dec 03 13:50:53.828847 master-0 kubenswrapper[4808]: I1203 13:50:53.828758 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:53.850642 master-0 
kubenswrapper[4808]: E1203 13:50:53.850518 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Dec 03 13:50:54.064001 master-0 kubenswrapper[4808]: I1203 13:50:54.063923 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:54.065552 master-0 kubenswrapper[4808]: I1203 13:50:54.065501 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:54.065636 master-0 kubenswrapper[4808]: I1203 13:50:54.065576 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:54.065636 master-0 kubenswrapper[4808]: I1203 13:50:54.065589 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:54.065704 master-0 kubenswrapper[4808]: I1203 13:50:54.065676 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:50:54.067511 master-0 kubenswrapper[4808]: E1203 13:50:54.067469 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:50:54.346750 master-0 kubenswrapper[4808]: W1203 13:50:54.346683 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:54.347004 master-0 kubenswrapper[4808]: E1203 13:50:54.346774 4808 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:54.484722 master-0 kubenswrapper[4808]: W1203 13:50:54.484523 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:54.484722 master-0 kubenswrapper[4808]: E1203 13:50:54.484713 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:54.608735 master-0 kubenswrapper[4808]: W1203 13:50:54.608280 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:54.608735 master-0 kubenswrapper[4808]: E1203 13:50:54.608380 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:54.654048 master-0 kubenswrapper[4808]: W1203 13:50:54.653965 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:54.654333 master-0 kubenswrapper[4808]: E1203 13:50:54.654051 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:54.828727 master-0 kubenswrapper[4808]: I1203 13:50:54.828647 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:55.829439 master-0 kubenswrapper[4808]: I1203 13:50:55.829338 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:56.831007 master-0 kubenswrapper[4808]: I1203 13:50:56.830921 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:57.052159 master-0 kubenswrapper[4808]: E1203 13:50:57.052031 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Dec 03 13:50:57.145402 master-0 
kubenswrapper[4808]: I1203 13:50:57.145207 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 13:50:57.148069 master-0 kubenswrapper[4808]: E1203 13:50:57.147953 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:57.268135 master-0 kubenswrapper[4808]: I1203 13:50:57.268063 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:50:57.269622 master-0 kubenswrapper[4808]: I1203 13:50:57.269584 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:50:57.269756 master-0 kubenswrapper[4808]: I1203 13:50:57.269636 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:50:57.269756 master-0 kubenswrapper[4808]: I1203 13:50:57.269651 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:50:57.269867 master-0 kubenswrapper[4808]: I1203 13:50:57.269857 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:50:57.271026 master-0 kubenswrapper[4808]: E1203 13:50:57.270949 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:50:57.828197 master-0 kubenswrapper[4808]: I1203 13:50:57.828110 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:58.050630 master-0 kubenswrapper[4808]: W1203 13:50:58.050511 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:58.051357 master-0 kubenswrapper[4808]: E1203 13:50:58.050614 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:50:58.828231 master-0 kubenswrapper[4808]: I1203 13:50:58.828113 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:50:59.828685 master-0 kubenswrapper[4808]: I1203 13:50:59.828512 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:00.424473 master-0 kubenswrapper[4808]: W1203 13:51:00.424247 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:00.424473 master-0 kubenswrapper[4808]: E1203 13:51:00.424348 4808 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:00.478720 master-0 kubenswrapper[4808]: W1203 13:51:00.478557 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:00.478720 master-0 kubenswrapper[4808]: E1203 13:51:00.478644 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:00.545594 master-0 kubenswrapper[4808]: W1203 13:51:00.545335 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:00.545594 master-0 kubenswrapper[4808]: E1203 13:51:00.545477 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:00.828923 master-0 kubenswrapper[4808]: I1203 13:51:00.828680 4808 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:00.950128 master-0 kubenswrapper[4808]: E1203 13:51:00.949942 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 13:51:01.829516 master-0 kubenswrapper[4808]: I1203 13:51:01.829221 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:02.371985 master-0 kubenswrapper[4808]: E1203 13:51:02.371652 4808 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db8d444ba14bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,LastTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:02.828487 master-0 kubenswrapper[4808]: I1203 13:51:02.828361 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: 
connect: connection refused Dec 03 13:51:03.453460 master-0 kubenswrapper[4808]: E1203 13:51:03.453404 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Dec 03 13:51:03.672110 master-0 kubenswrapper[4808]: I1203 13:51:03.671801 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:03.673584 master-0 kubenswrapper[4808]: I1203 13:51:03.673533 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:03.673584 master-0 kubenswrapper[4808]: I1203 13:51:03.673600 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:03.673946 master-0 kubenswrapper[4808]: I1203 13:51:03.673619 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:03.673946 master-0 kubenswrapper[4808]: I1203 13:51:03.673715 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:51:03.675103 master-0 kubenswrapper[4808]: E1203 13:51:03.675016 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:51:03.828885 master-0 kubenswrapper[4808]: I1203 13:51:03.828743 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:04.829059 master-0 kubenswrapper[4808]: I1203 13:51:04.828938 4808 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:05.824116 master-0 kubenswrapper[4808]: I1203 13:51:05.823865 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 13:51:05.826234 master-0 kubenswrapper[4808]: E1203 13:51:05.826180 4808 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:05.827983 master-0 kubenswrapper[4808]: I1203 13:51:05.827909 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:06.244803 master-0 kubenswrapper[4808]: W1203 13:51:06.244453 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:06.244803 master-0 kubenswrapper[4808]: E1203 13:51:06.244596 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:06.828472 master-0 
kubenswrapper[4808]: I1203 13:51:06.828368 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:07.828646 master-0 kubenswrapper[4808]: I1203 13:51:07.828488 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:08.828701 master-0 kubenswrapper[4808]: I1203 13:51:08.828529 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:09.828842 master-0 kubenswrapper[4808]: I1203 13:51:09.828413 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:10.060563 master-0 kubenswrapper[4808]: I1203 13:51:10.060415 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c"} Dec 03 13:51:10.061745 master-0 kubenswrapper[4808]: I1203 13:51:10.061701 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"51d215fc84560f1f6ad187305809ecedf73402cb7d8d1d69a0d33aa56e548bef"} Dec 03 13:51:10.114156 master-0 
kubenswrapper[4808]: W1203 13:51:10.114023 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:10.114156 master-0 kubenswrapper[4808]: E1203 13:51:10.114149 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:10.455431 master-0 kubenswrapper[4808]: E1203 13:51:10.455296 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Dec 03 13:51:10.675502 master-0 kubenswrapper[4808]: I1203 13:51:10.675223 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:10.677072 master-0 kubenswrapper[4808]: I1203 13:51:10.677019 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:10.677072 master-0 kubenswrapper[4808]: I1203 13:51:10.677065 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:10.677072 master-0 kubenswrapper[4808]: I1203 13:51:10.677080 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:10.677693 master-0 kubenswrapper[4808]: I1203 13:51:10.677163 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 
13:51:10.678408 master-0 kubenswrapper[4808]: E1203 13:51:10.678331 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:51:10.828835 master-0 kubenswrapper[4808]: I1203 13:51:10.828674 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:10.897951 master-0 kubenswrapper[4808]: W1203 13:51:10.897845 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:51:10.897951 master-0 kubenswrapper[4808]: E1203 13:51:10.897960 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:51:10.951428 master-0 kubenswrapper[4808]: E1203 13:51:10.951232 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 13:51:11.066363 master-0 kubenswrapper[4808]: I1203 13:51:11.066288 4808 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c" exitCode=0 Dec 03 13:51:11.066607 master-0 kubenswrapper[4808]: I1203 13:51:11.066419 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c"} Dec 03 13:51:11.066607 master-0 kubenswrapper[4808]: I1203 13:51:11.066475 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:11.067652 master-0 kubenswrapper[4808]: I1203 13:51:11.067617 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:11.067718 master-0 kubenswrapper[4808]: I1203 13:51:11.067660 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:11.067718 master-0 kubenswrapper[4808]: I1203 13:51:11.067675 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:11.068947 master-0 kubenswrapper[4808]: I1203 13:51:11.068664 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"d411a9d4993d118dc0e255c06261c1eb2d14f7c6ba1e4128eeb20ef007aba795"} Dec 03 13:51:11.068947 master-0 kubenswrapper[4808]: I1203 13:51:11.068702 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"886fbb171cc796081daa33c863e0ffd8e881f69d0055d5d49edec8b6ff9d962d"} Dec 03 13:51:11.068947 master-0 kubenswrapper[4808]: I1203 13:51:11.068717 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:11.069502 master-0 kubenswrapper[4808]: I1203 13:51:11.069474 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:11.069577 
master-0 kubenswrapper[4808]: I1203 13:51:11.069508 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:11.069577 master-0 kubenswrapper[4808]: I1203 13:51:11.069522 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:11.070410 master-0 kubenswrapper[4808]: I1203 13:51:11.070374 4808 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3" exitCode=0 Dec 03 13:51:11.070464 master-0 kubenswrapper[4808]: I1203 13:51:11.070430 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:11.070505 master-0 kubenswrapper[4808]: I1203 13:51:11.070465 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerDied","Data":"23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3"} Dec 03 13:51:11.071337 master-0 kubenswrapper[4808]: I1203 13:51:11.071318 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:11.071394 master-0 kubenswrapper[4808]: I1203 13:51:11.071348 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:11.071394 master-0 kubenswrapper[4808]: I1203 13:51:11.071361 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:11.072603 master-0 kubenswrapper[4808]: I1203 13:51:11.072567 4808 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="928d610ce063d48a64cbc885d60fb997c89243d56cfbef517a5cbef004ed9c17" exitCode=1 Dec 03 13:51:11.072699 master-0 
kubenswrapper[4808]: I1203 13:51:11.072682 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:11.072783 master-0 kubenswrapper[4808]: I1203 13:51:11.072706 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"928d610ce063d48a64cbc885d60fb997c89243d56cfbef517a5cbef004ed9c17"} Dec 03 13:51:11.073468 master-0 kubenswrapper[4808]: I1203 13:51:11.073446 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:11.073529 master-0 kubenswrapper[4808]: I1203 13:51:11.073475 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:11.073529 master-0 kubenswrapper[4808]: I1203 13:51:11.073489 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:11.073800 master-0 kubenswrapper[4808]: I1203 13:51:11.073782 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:11.074465 master-0 kubenswrapper[4808]: I1203 13:51:11.074438 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:11.074526 master-0 kubenswrapper[4808]: I1203 13:51:11.074472 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:11.074526 master-0 kubenswrapper[4808]: I1203 13:51:11.074486 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:12.078381 master-0 kubenswrapper[4808]: I1203 13:51:12.078223 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/0.log" Dec 03 13:51:12.079442 master-0 kubenswrapper[4808]: I1203 13:51:12.079352 4808 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="431e490768142b54ddfbb7ba9e7fb3ac9e99f18a8e9214c7c713f8cfcc1b50be" exitCode=1 Dec 03 13:51:12.079442 master-0 kubenswrapper[4808]: I1203 13:51:12.079416 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"431e490768142b54ddfbb7ba9e7fb3ac9e99f18a8e9214c7c713f8cfcc1b50be"} Dec 03 13:51:12.079558 master-0 kubenswrapper[4808]: I1203 13:51:12.079488 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:12.081198 master-0 kubenswrapper[4808]: I1203 13:51:12.081163 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:12.081285 master-0 kubenswrapper[4808]: I1203 13:51:12.081204 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:12.081285 master-0 kubenswrapper[4808]: I1203 13:51:12.081219 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:12.081612 master-0 kubenswrapper[4808]: I1203 13:51:12.081591 4808 scope.go:117] "RemoveContainer" containerID="431e490768142b54ddfbb7ba9e7fb3ac9e99f18a8e9214c7c713f8cfcc1b50be" Dec 03 13:51:12.082032 master-0 kubenswrapper[4808]: I1203 13:51:12.082001 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:12.082166 master-0 kubenswrapper[4808]: I1203 13:51:12.082104 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"d559032002ae450f2dcc5a6551686ae528fbdc12019934f45dbbd1835ac0a064"} Dec 03 13:51:12.082588 master-0 kubenswrapper[4808]: I1203 13:51:12.082561 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:12.082652 master-0 kubenswrapper[4808]: I1203 13:51:12.082595 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:12.082652 master-0 kubenswrapper[4808]: I1203 13:51:12.082608 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:13.257485 master-0 kubenswrapper[4808]: W1203 13:51:13.251604 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 03 13:51:13.257485 master-0 kubenswrapper[4808]: E1203 13:51:13.251721 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 03 13:51:13.257485 master-0 kubenswrapper[4808]: I1203 13:51:13.251894 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:13.257485 master-0 kubenswrapper[4808]: E1203 13:51:13.252021 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d444ba14bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,LastTimestamp:2025-12-03 13:50:50.824725692 +0000 UTC m=+0.265023627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.275440 master-0 kubenswrapper[4808]: E1203 13:51:13.273571 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.295637 master-0 kubenswrapper[4808]: E1203 13:51:13.294502 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.301776 master-0 kubenswrapper[4808]: E1203 13:51:13.301476 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.308507 master-0 kubenswrapper[4808]: E1203 13:51:13.308327 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44c49244d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.951541837 +0000 UTC m=+0.391839782,LastTimestamp:2025-12-03 13:50:50.951541837 +0000 UTC m=+0.391839782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.316826 master-0 kubenswrapper[4808]: E1203 13:51:13.316535 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.049990077 +0000 UTC m=+0.490288012,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.323462 master-0 kubenswrapper[4808]: E1203 13:51:13.323001 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.050016018 +0000 UTC m=+0.490313953,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.332004 master-0 kubenswrapper[4808]: E1203 13:51:13.331828 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.050035778 +0000 UTC m=+0.490333713,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.339784 master-0 kubenswrapper[4808]: E1203 13:51:13.339582 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.107061686 +0000 UTC m=+0.547359621,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.345676 master-0 kubenswrapper[4808]: E1203 13:51:13.345531 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.107081837 +0000 UTC m=+0.547379772,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.350112 master-0 kubenswrapper[4808]: E1203 13:51:13.349971 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.107090247 +0000 UTC m=+0.547388182,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.355400 master-0 kubenswrapper[4808]: E1203 13:51:13.355232 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.108626185 +0000 UTC m=+0.548924120,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.360887 master-0 kubenswrapper[4808]: E1203 13:51:13.360711 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.108649136 +0000 UTC m=+0.548947071,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.366858 master-0 kubenswrapper[4808]: E1203 13:51:13.366692 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.108665597 +0000 UTC m=+0.548963532,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.374100 master-0 kubenswrapper[4808]: E1203 13:51:13.373950 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.109138291 +0000 UTC m=+0.549436226,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.380018 master-0 kubenswrapper[4808]: E1203 13:51:13.379741 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.109149362 +0000 UTC m=+0.549447297,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.386001 master-0 kubenswrapper[4808]: E1203 13:51:13.385778 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.109157582 +0000 UTC m=+0.549455517,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.392284 master-0 kubenswrapper[4808]: E1203 13:51:13.392099 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.10973537 +0000 UTC m=+0.550033305,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.399489 master-0 kubenswrapper[4808]: E1203 13:51:13.399326 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.10975135 +0000 UTC m=+0.550049285,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.405333 master-0 kubenswrapper[4808]: E1203 13:51:13.405141 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.109760591 +0000 UTC m=+0.550058526,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.411323 master-0 kubenswrapper[4808]: E1203 13:51:13.411096 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.1100754 +0000 UTC m=+0.550373335,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.417548 master-0 kubenswrapper[4808]: E1203 13:51:13.417349 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.110095341 +0000 UTC m=+0.550393266,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.423028 master-0 kubenswrapper[4808]: E1203 13:51:13.422907 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773eed6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773eed6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870460118 +0000 UTC m=+0.310758053,LastTimestamp:2025-12-03 13:50:51.110105581 +0000 UTC m=+0.550403506,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.428128 master-0 kubenswrapper[4808]: E1203 13:51:13.427845 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d447734e5d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d447734e5d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870419037 +0000 UTC m=+0.310716982,LastTimestamp:2025-12-03 13:50:51.111496765 +0000 UTC m=+0.551794700,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.433192 master-0 kubenswrapper[4808]: E1203 13:51:13.433046 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.187db8d44773c212\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.187db8d44773c212 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:50.870448658 +0000 UTC m=+0.310746593,LastTimestamp:2025-12-03 13:50:51.111549616 +0000 UTC m=+0.551847561,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.473101 master-0 kubenswrapper[4808]: E1203 13:51:13.472599 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d49e7dbc7b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:52.330720379 +0000 UTC m=+1.771018334,LastTimestamp:2025-12-03 13:50:52.330720379 +0000 UTC m=+1.771018334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.485908 master-0 kubenswrapper[4808]: E1203 13:51:13.485005 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.187db8d49f9c93f4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:d78739a7694769882b7e47ea5ac08a10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:52.349518836 +0000 UTC m=+1.789816771,LastTimestamp:2025-12-03 13:50:52.349518836 +0000 UTC m=+1.789816771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.492283 master-0 kubenswrapper[4808]: E1203 13:51:13.492006 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d4a178f912 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:52.380739858 +0000 UTC m=+1.821037793,LastTimestamp:2025-12-03 13:50:52.380739858 +0000 UTC m=+1.821037793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.498776 master-0 kubenswrapper[4808]: E1203 13:51:13.497998 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d4a35dcbfa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:52.412513274 +0000 UTC m=+1.852811209,LastTimestamp:2025-12-03 13:50:52.412513274 +0000 UTC m=+1.852811209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.505328 master-0 kubenswrapper[4808]: E1203 13:51:13.505098 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d4a8b639ca openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:50:52.502194634 +0000 UTC m=+1.942492569,LastTimestamp:2025-12-03 13:50:52.502194634 +0000 UTC m=+1.942492569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.529132 master-0 kubenswrapper[4808]: E1203 13:51:13.524641 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d734d2e0a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\" in 10.94s (10.94s including waiting). 
Image size: 459566623 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:03.442817187 +0000 UTC m=+12.883115122,LastTimestamp:2025-12-03 13:51:03.442817187 +0000 UTC m=+12.883115122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.536534 master-0 kubenswrapper[4808]: E1203 13:51:13.536006 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.187db8d876d5f6f7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:d78739a7694769882b7e47ea5ac08a10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\" in 16.495s (16.495s including waiting). 
Image size: 938321573 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:08.845283063 +0000 UTC m=+18.285580998,LastTimestamp:2025-12-03 13:51:08.845283063 +0000 UTC m=+18.285580998,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.543294 master-0 kubenswrapper[4808]: E1203 13:51:13.542973 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d876d9c61c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\" in 16.432s (16.432s including waiting). 
Image size: 532668041 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:08.8455327 +0000 UTC m=+18.285830635,LastTimestamp:2025-12-03 13:51:08.8455327 +0000 UTC m=+18.285830635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.549545 master-0 kubenswrapper[4808]: E1203 13:51:13.549033 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d878e2f4ff kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\" in 16.548s (16.548s including waiting). 
Image size: 938321573 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:08.879688959 +0000 UTC m=+18.319986894,LastTimestamp:2025-12-03 13:51:08.879688959 +0000 UTC m=+18.319986894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.554004 master-0 kubenswrapper[4808]: E1203 13:51:13.553823 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d87dacbdd6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\" in 16.579s (16.579s including waiting). 
Image size: 938321573 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:08.960021974 +0000 UTC m=+18.400319909,LastTimestamp:2025-12-03 13:51:08.960021974 +0000 UTC m=+18.400319909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.560623 master-0 kubenswrapper[4808]: E1203 13:51:13.560349 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.187db8d8aee28753 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:d78739a7694769882b7e47ea5ac08a10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:09.785630547 +0000 UTC m=+19.225928492,LastTimestamp:2025-12-03 13:51:09.785630547 +0000 UTC m=+19.225928492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.567523 master-0 kubenswrapper[4808]: E1203 13:51:13.567321 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d8b3bb8887 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:09.866961031 +0000 UTC m=+19.307258976,LastTimestamp:2025-12-03 13:51:09.866961031 +0000 UTC m=+19.307258976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.573596 master-0 kubenswrapper[4808]: E1203 13:51:13.573411 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d8c3a3737a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.133818234 +0000 UTC m=+19.574116189,LastTimestamp:2025-12-03 13:51:10.133818234 +0000 UTC m=+19.574116189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.579577 master-0 kubenswrapper[4808]: E1203 13:51:13.579420 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d8c5784d55 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.164544853 +0000 UTC m=+19.604842828,LastTimestamp:2025-12-03 13:51:10.164544853 +0000 UTC m=+19.604842828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.584776 master-0 kubenswrapper[4808]: E1203 13:51:13.584291 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d8c5906319 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.166123289 +0000 UTC m=+19.606421224,LastTimestamp:2025-12-03 13:51:10.166123289 +0000 UTC m=+19.606421224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.589728 master-0 kubenswrapper[4808]: E1203 13:51:13.589378 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.187db8d8c5973bf6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:d78739a7694769882b7e47ea5ac08a10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.166572022 +0000 UTC m=+19.606869957,LastTimestamp:2025-12-03 13:51:10.166572022 +0000 UTC m=+19.606869957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.594990 master-0 kubenswrapper[4808]: E1203 13:51:13.594824 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d8c63f1b26 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.17757367 +0000 UTC m=+19.617871605,LastTimestamp:2025-12-03 13:51:10.17757367 +0000 UTC m=+19.617871605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.600133 master-0 kubenswrapper[4808]: E1203 13:51:13.600006 4808 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d8c7db0ac7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.204570311 +0000 UTC m=+19.644868246,LastTimestamp:2025-12-03 13:51:10.204570311 +0000 UTC m=+19.644868246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.605234 master-0 kubenswrapper[4808]: E1203 13:51:13.604736 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d8c8012944 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.207068484 +0000 UTC m=+19.647366409,LastTimestamp:2025-12-03 13:51:10.207068484 +0000 UTC m=+19.647366409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.610318 master-0 kubenswrapper[4808]: E1203 13:51:13.610123 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d8c897af3b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.216933179 +0000 UTC m=+19.657231114,LastTimestamp:2025-12-03 13:51:10.216933179 +0000 UTC m=+19.657231114,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.614920 master-0 kubenswrapper[4808]: E1203 13:51:13.614715 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d8c8b0bffb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.218575867 +0000 UTC 
m=+19.658873802,LastTimestamp:2025-12-03 13:51:10.218575867 +0000 UTC m=+19.658873802,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.620085 master-0 kubenswrapper[4808]: E1203 13:51:13.619431 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d8c8c8d9c4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.220155332 +0000 UTC m=+19.660453267,LastTimestamp:2025-12-03 13:51:10.220155332 +0000 UTC m=+19.660453267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.644877 master-0 kubenswrapper[4808]: E1203 13:51:13.644720 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d8da270e69 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.511541865 +0000 UTC m=+19.951839800,LastTimestamp:2025-12-03 13:51:10.511541865 +0000 UTC m=+19.951839800,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.654046 master-0 kubenswrapper[4808]: E1203 13:51:13.653832 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db8d8dc551e46 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.548115014 +0000 UTC m=+19.988412949,LastTimestamp:2025-12-03 13:51:10.548115014 +0000 UTC m=+19.988412949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.666244 master-0 kubenswrapper[4808]: E1203 13:51:13.666057 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d8fb75bbf3 openshift-machine-config-operator 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.070346227 +0000 UTC m=+20.510644162,LastTimestamp:2025-12-03 13:51:11.070346227 +0000 UTC m=+20.510644162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:13.673167 master-0 kubenswrapper[4808]: E1203 13:51:13.673040 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d8fba933b9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.073719225 +0000 UTC m=+20.514017160,LastTimestamp:2025-12-03 13:51:11.073719225 +0000 UTC m=+20.514017160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.679670 master-0 kubenswrapper[4808]: E1203 13:51:13.679479 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d9087e2fd0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.289003984 +0000 UTC m=+20.729301919,LastTimestamp:2025-12-03 13:51:11.289003984 +0000 UTC m=+20.729301919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.685814 master-0 kubenswrapper[4808]: E1203 13:51:13.685604 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d90896fb43 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.290628931 +0000 UTC m=+20.730926866,LastTimestamp:2025-12-03 13:51:11.290628931 +0000 UTC m=+20.730926866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.690480 master-0 kubenswrapper[4808]: E1203 13:51:13.690356 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d909454604 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.302051332 +0000 UTC m=+20.742349267,LastTimestamp:2025-12-03 13:51:11.302051332 +0000 UTC m=+20.742349267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.696840 master-0 kubenswrapper[4808]: E1203 13:51:13.696638 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8d9095e6e65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.303700069 +0000 UTC m=+20.743998004,LastTimestamp:2025-12-03 13:51:11.303700069 +0000 UTC m=+20.743998004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.702785 master-0 kubenswrapper[4808]: E1203 13:51:13.702578 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9096d5d5c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.304678748 +0000 UTC m=+20.744976673,LastTimestamp:2025-12-03 13:51:11.304678748 +0000 UTC m=+20.744976673,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.709317 master-0 kubenswrapper[4808]: E1203 13:51:13.709034 4808 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-rbac-proxy-crio-master-0.187db8d8fb75bbf3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d8fb75bbf3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.070346227 +0000 UTC m=+20.510644162,LastTimestamp:2025-12-03 13:51:12.963732021 +0000 UTC m=+22.404029956,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.716014 master-0 kubenswrapper[4808]: E1203 13:51:13.715797 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d96f76ab46 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\" in 2.796s (2.796s including waiting). Image size: 499719811 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:13.01656455 +0000 UTC m=+22.456862485,LastTimestamp:2025-12-03 13:51:13.01656455 +0000 UTC m=+22.456862485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.722220 master-0 kubenswrapper[4808]: E1203 13:51:13.722021 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d90896fb43\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d90896fb43 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.290628931 +0000 UTC m=+20.730926866,LastTimestamp:2025-12-03 13:51:13.248466051 +0000 UTC m=+22.688763986,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.728553 master-0 kubenswrapper[4808]: E1203 13:51:13.728171 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d97e3564af kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:13.263944879 +0000 UTC m=+22.704242814,LastTimestamp:2025-12-03 13:51:13.263944879 +0000 UTC m=+22.704242814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.734918 master-0 kubenswrapper[4808]: E1203 13:51:13.734694 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d9096d5d5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9096d5d5c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.304678748 +0000 UTC m=+20.744976673,LastTimestamp:2025-12-03 13:51:13.274476324 +0000 UTC m=+22.714774259,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.740113 master-0 kubenswrapper[4808]: E1203 13:51:13.740026 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d9847e7867 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:13.369397351 +0000 UTC m=+22.809695296,LastTimestamp:2025-12-03 13:51:13.369397351 +0000 UTC m=+22.809695296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:13.832684 master-0 kubenswrapper[4808]: I1203 13:51:13.832631 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:14.097897 master-0 kubenswrapper[4808]: I1203 13:51:14.097832 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c"}
Dec 03 13:51:14.098152 master-0 kubenswrapper[4808]: I1203 13:51:14.097979 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:14.098845 master-0 kubenswrapper[4808]: I1203 13:51:14.098779 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:14.098845 master-0 kubenswrapper[4808]: I1203 13:51:14.098803 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:14.098845 master-0 kubenswrapper[4808]: I1203 13:51:14.098812 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:14.099240 master-0 kubenswrapper[4808]: I1203 13:51:14.099059 4808 scope.go:117] "RemoveContainer" containerID="928d610ce063d48a64cbc885d60fb997c89243d56cfbef517a5cbef004ed9c17"
Dec 03 13:51:14.099778 master-0 kubenswrapper[4808]: I1203 13:51:14.099723 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/1.log"
Dec 03 13:51:14.100290 master-0 kubenswrapper[4808]: I1203 13:51:14.100234 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/0.log"
Dec 03 13:51:14.101008 master-0 kubenswrapper[4808]: I1203 13:51:14.100703 4808 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e" exitCode=1
Dec 03 13:51:14.101008 master-0 kubenswrapper[4808]: I1203 13:51:14.100759 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e"}
Dec 03 13:51:14.101008 master-0 kubenswrapper[4808]: I1203 13:51:14.100808 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:14.101008 master-0 kubenswrapper[4808]: I1203 13:51:14.100883 4808 scope.go:117] "RemoveContainer" containerID="431e490768142b54ddfbb7ba9e7fb3ac9e99f18a8e9214c7c713f8cfcc1b50be"
Dec 03 13:51:14.102233 master-0 kubenswrapper[4808]: I1203 13:51:14.101735 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:14.102233 master-0 kubenswrapper[4808]: I1203 13:51:14.101758 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:14.102233 master-0 kubenswrapper[4808]: I1203 13:51:14.101768 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:14.102233 master-0 kubenswrapper[4808]: I1203 13:51:14.101989 4808 scope.go:117] "RemoveContainer" containerID="05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e"
Dec 03 13:51:14.102233 master-0 kubenswrapper[4808]: E1203 13:51:14.102190 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b495b0c38f2c54e7cc46282c5f92aab5"
Dec 03 13:51:14.108402 master-0 kubenswrapper[4808]: E1203 13:51:14.108161 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9b02a9553 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:14.102097235 +0000 UTC m=+23.542395170,LastTimestamp:2025-12-03 13:51:14.102097235 +0000 UTC m=+23.542395170,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:14.832491 master-0 kubenswrapper[4808]: I1203 13:51:14.832316 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:14.858450 master-0 kubenswrapper[4808]: I1203 13:51:14.858288 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:14.865195 master-0 kubenswrapper[4808]: I1203 13:51:14.865125 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:15.104301 master-0 kubenswrapper[4808]: I1203 13:51:15.104099 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:15.105612 master-0 kubenswrapper[4808]: I1203 13:51:15.105551 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:15.105612 master-0 kubenswrapper[4808]: I1203 13:51:15.105599 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:15.105612 master-0 kubenswrapper[4808]: I1203 13:51:15.105609 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:15.106125 master-0 
kubenswrapper[4808]: I1203 13:51:15.106026 4808 scope.go:117] "RemoveContainer" containerID="05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e"
Dec 03 13:51:15.106424 master-0 kubenswrapper[4808]: E1203 13:51:15.106220 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b495b0c38f2c54e7cc46282c5f92aab5"
Dec 03 13:51:15.113567 master-0 kubenswrapper[4808]: E1203 13:51:15.113378 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d9b02a9553\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9b02a9553 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:14.102097235 +0000 UTC m=+23.542395170,LastTimestamp:2025-12-03 13:51:15.106187823 +0000 UTC m=+24.546485758,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.223578 master-0 kubenswrapper[4808]: I1203 13:51:15.223449 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:15.472204 master-0 kubenswrapper[4808]: E1203 13:51:15.471776 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8da01644d08 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:15.464834312 +0000 UTC m=+24.905132247,LastTimestamp:2025-12-03 13:51:15.464834312 +0000 UTC m=+24.905132247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.591786 master-0 kubenswrapper[4808]: E1203 13:51:15.591398 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8da088094e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\" in 4.28s (4.28s including waiting). Image size: 509451797 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:15.584128225 +0000 UTC m=+25.024426160,LastTimestamp:2025-12-03 13:51:15.584128225 +0000 UTC m=+25.024426160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.667344 master-0 kubenswrapper[4808]: E1203 13:51:15.667122 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.187db8d8c5906319\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d8c5906319 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.166123289 +0000 UTC m=+19.606421224,LastTimestamp:2025-12-03 13:51:15.660494285 +0000 UTC m=+25.100792220,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.677970 master-0 kubenswrapper[4808]: E1203 13:51:15.677774 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.187db8d8c8b0bffb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.187db8d8c8b0bffb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:7bce50c457ac1f4721bc81a570dd238a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:10.218575867 +0000 UTC m=+19.658873802,LastTimestamp:2025-12-03 13:51:15.671117842 +0000 UTC m=+25.111415777,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.780095 master-0 kubenswrapper[4808]: E1203 13:51:15.779929 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8da13d453a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:15.774165924 +0000 UTC m=+25.214463859,LastTimestamp:2025-12-03 13:51:15.774165924 +0000 UTC m=+25.214463859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.794844 master-0 kubenswrapper[4808]: E1203 13:51:15.794630 4808 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.187db8da14a8e85b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:13238af3704fe583f617f61e755cf4c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:15.788097627 +0000 UTC m=+25.228395562,LastTimestamp:2025-12-03 13:51:15.788097627 +0000 UTC m=+25.228395562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:51:15.833622 master-0 kubenswrapper[4808]: I1203 13:51:15.833554 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:16.113651 master-0 kubenswrapper[4808]: I1203 13:51:16.113013 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:16.113651 master-0 kubenswrapper[4808]: I1203 13:51:16.113011 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"}
Dec 03 13:51:16.113651 master-0 kubenswrapper[4808]: I1203 13:51:16.113231 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:16.114652 master-0 kubenswrapper[4808]: I1203 13:51:16.114543 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:16.114652 master-0 kubenswrapper[4808]: I1203 13:51:16.114598 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:16.114652 master-0 kubenswrapper[4808]: I1203 13:51:16.114613 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:16.119705 master-0 kubenswrapper[4808]: I1203 13:51:16.119631 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/1.log"
Dec 03 13:51:16.124453 master-0 kubenswrapper[4808]: I1203 13:51:16.124390 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1"}
Dec 03 13:51:16.124672 master-0 kubenswrapper[4808]: I1203 13:51:16.124570 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:16.125950 master-0 kubenswrapper[4808]: I1203 13:51:16.125905 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:16.126025 master-0 kubenswrapper[4808]: I1203 13:51:16.125965 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:16.126025 master-0 kubenswrapper[4808]: I1203 13:51:16.125983 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:16.835175 master-0 kubenswrapper[4808]: I1203 13:51:16.835091 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:17.130640 master-0 kubenswrapper[4808]: I1203 13:51:17.128397 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:17.130640 master-0 kubenswrapper[4808]: I1203 13:51:17.128577 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:17.132735 master-0 kubenswrapper[4808]: I1203 13:51:17.131826 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:17.132735 master-0 kubenswrapper[4808]: I1203 13:51:17.131876 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:17.132735 master-0 kubenswrapper[4808]: I1203 13:51:17.131893 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:17.132962 master-0 kubenswrapper[4808]: I1203 13:51:17.132889 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:17.133015 master-0 kubenswrapper[4808]: I1203 13:51:17.132975 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:17.133015 master-0 kubenswrapper[4808]: I1203 13:51:17.132990 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:17.312764 master-0 kubenswrapper[4808]: I1203 13:51:17.312648 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:51:17.462379 master-0 kubenswrapper[4808]: E1203 13:51:17.462154 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Dec 03 13:51:17.678945 master-0 kubenswrapper[4808]: I1203 13:51:17.678870 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:17.680666 master-0 kubenswrapper[4808]: I1203 13:51:17.680610 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:17.680729 master-0 kubenswrapper[4808]: I1203 13:51:17.680678 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:17.680729 master-0 kubenswrapper[4808]: I1203 13:51:17.680692 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:17.680884 master-0 kubenswrapper[4808]: I1203 13:51:17.680768 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:51:17.685698 master-0 kubenswrapper[4808]: E1203 13:51:17.685650 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Dec 03 13:51:17.832750 master-0 kubenswrapper[4808]: I1203 13:51:17.832669 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:18.060193 master-0 kubenswrapper[4808]: I1203 13:51:18.060092 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:18.064676 master-0 kubenswrapper[4808]: I1203 13:51:18.064625 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:18.130166 master-0 kubenswrapper[4808]: I1203 13:51:18.129770 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:18.130166 master-0 kubenswrapper[4808]: I1203 13:51:18.129860 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:18.130166 master-0 kubenswrapper[4808]: I1203 13:51:18.129970 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.130617 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.130648 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.130664 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.131472 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.131525 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:18.131562 master-0 kubenswrapper[4808]: I1203 13:51:18.131536 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:18.833889 master-0 kubenswrapper[4808]: I1203 13:51:18.833752 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Dec 03 13:51:19.133054 master-0 kubenswrapper[4808]: I1203 13:51:19.132699 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:19.134087 master-0 kubenswrapper[4808]: I1203 13:51:19.133794 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:51:19.134087 master-0 kubenswrapper[4808]: I1203 13:51:19.133849 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:51:19.134087 master-0 kubenswrapper[4808]: I1203 13:51:19.133864 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:51:19.365178 master-0 kubenswrapper[4808]: I1203 13:51:19.365082 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:51:19.365443 master-0 kubenswrapper[4808]: I1203 13:51:19.365345 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:51:19.366730 master-0 kubenswrapper[4808]: I1203 13:51:19.366668 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 
13:51:19.366810 master-0 kubenswrapper[4808]: I1203 13:51:19.366739 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:19.366810 master-0 kubenswrapper[4808]: I1203 13:51:19.366754 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:19.371631 master-0 kubenswrapper[4808]: I1203 13:51:19.371546 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:51:19.833929 master-0 kubenswrapper[4808]: I1203 13:51:19.833843 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:20.134388 master-0 kubenswrapper[4808]: I1203 13:51:20.134338 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:20.135405 master-0 kubenswrapper[4808]: I1203 13:51:20.135362 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:20.135464 master-0 kubenswrapper[4808]: I1203 13:51:20.135412 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:20.135464 master-0 kubenswrapper[4808]: I1203 13:51:20.135423 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:20.139572 master-0 kubenswrapper[4808]: I1203 13:51:20.139519 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:51:20.834837 master-0 kubenswrapper[4808]: I1203 13:51:20.834768 4808 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:20.952186 master-0 kubenswrapper[4808]: E1203 13:51:20.952016 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 13:51:21.139468 master-0 kubenswrapper[4808]: I1203 13:51:21.139377 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:21.140747 master-0 kubenswrapper[4808]: I1203 13:51:21.140686 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:21.140933 master-0 kubenswrapper[4808]: I1203 13:51:21.140754 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:21.140933 master-0 kubenswrapper[4808]: I1203 13:51:21.140768 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:21.833549 master-0 kubenswrapper[4808]: I1203 13:51:21.833473 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:22.543871 master-0 kubenswrapper[4808]: I1203 13:51:22.543759 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 03 13:51:22.583865 master-0 kubenswrapper[4808]: I1203 13:51:22.583804 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 13:51:22.832561 master-0 kubenswrapper[4808]: I1203 13:51:22.832481 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:23.833626 master-0 kubenswrapper[4808]: I1203 13:51:23.833548 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:24.468975 master-0 kubenswrapper[4808]: E1203 13:51:24.468876 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 03 13:51:24.686705 master-0 kubenswrapper[4808]: I1203 13:51:24.686612 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:24.688186 master-0 kubenswrapper[4808]: I1203 13:51:24.688130 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:24.688275 master-0 kubenswrapper[4808]: I1203 13:51:24.688194 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:24.688275 master-0 kubenswrapper[4808]: I1203 13:51:24.688205 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:24.688341 master-0 kubenswrapper[4808]: I1203 13:51:24.688291 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:51:24.694806 master-0 kubenswrapper[4808]: E1203 13:51:24.694735 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster 
scope" node="master-0" Dec 03 13:51:24.833110 master-0 kubenswrapper[4808]: I1203 13:51:24.833042 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:25.832744 master-0 kubenswrapper[4808]: I1203 13:51:25.832616 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:26.422745 master-0 kubenswrapper[4808]: W1203 13:51:26.422638 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:26.423082 master-0 kubenswrapper[4808]: E1203 13:51:26.422860 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Dec 03 13:51:26.786515 master-0 kubenswrapper[4808]: W1203 13:51:26.786333 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 03 13:51:26.786515 master-0 kubenswrapper[4808]: E1203 13:51:26.786423 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at 
the cluster scope" logger="UnhandledError" Dec 03 13:51:26.834056 master-0 kubenswrapper[4808]: I1203 13:51:26.833967 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:27.004985 master-0 kubenswrapper[4808]: I1203 13:51:27.004894 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:27.006387 master-0 kubenswrapper[4808]: I1203 13:51:27.006347 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:27.006506 master-0 kubenswrapper[4808]: I1203 13:51:27.006399 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:27.006506 master-0 kubenswrapper[4808]: I1203 13:51:27.006410 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:27.006786 master-0 kubenswrapper[4808]: I1203 13:51:27.006756 4808 scope.go:117] "RemoveContainer" containerID="05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e" Dec 03 13:51:27.013447 master-0 kubenswrapper[4808]: E1203 13:51:27.013285 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d8fb75bbf3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d8fb75bbf3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.070346227 +0000 UTC m=+20.510644162,LastTimestamp:2025-12-03 13:51:27.008125512 +0000 UTC m=+36.448423447,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:27.381207 master-0 kubenswrapper[4808]: E1203 13:51:27.380944 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d90896fb43\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d90896fb43 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.290628931 +0000 UTC m=+20.730926866,LastTimestamp:2025-12-03 13:51:27.374575366 +0000 UTC m=+36.814873301,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:27.632032 master-0 kubenswrapper[4808]: E1203 13:51:27.631676 
4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d9096d5d5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9096d5d5c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:11.304678748 +0000 UTC m=+20.744976673,LastTimestamp:2025-12-03 13:51:27.620571245 +0000 UTC m=+37.060869190,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:27.834594 master-0 kubenswrapper[4808]: I1203 13:51:27.834528 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:28.160495 master-0 kubenswrapper[4808]: I1203 13:51:28.160438 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log" Dec 03 13:51:28.161604 master-0 kubenswrapper[4808]: I1203 13:51:28.161544 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/1.log" Dec 03 13:51:28.163422 master-0 kubenswrapper[4808]: I1203 13:51:28.163350 
4808 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" exitCode=1 Dec 03 13:51:28.163422 master-0 kubenswrapper[4808]: I1203 13:51:28.163423 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b"} Dec 03 13:51:28.163812 master-0 kubenswrapper[4808]: I1203 13:51:28.163482 4808 scope.go:117] "RemoveContainer" containerID="05ce110525ab54d19b0693a12bf4ba79839107f7071eb0a6634a069b2693231e" Dec 03 13:51:28.163812 master-0 kubenswrapper[4808]: I1203 13:51:28.163614 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:28.165705 master-0 kubenswrapper[4808]: I1203 13:51:28.164910 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:28.165705 master-0 kubenswrapper[4808]: I1203 13:51:28.165002 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:28.165705 master-0 kubenswrapper[4808]: I1203 13:51:28.165016 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:28.165705 master-0 kubenswrapper[4808]: I1203 13:51:28.165538 4808 scope.go:117] "RemoveContainer" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" Dec 03 13:51:28.166079 master-0 kubenswrapper[4808]: E1203 13:51:28.165820 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b495b0c38f2c54e7cc46282c5f92aab5" Dec 03 13:51:28.172497 master-0 kubenswrapper[4808]: E1203 13:51:28.172249 4808 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.187db8d9b02a9553\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.187db8d9b02a9553 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b495b0c38f2c54e7cc46282c5f92aab5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:51:14.102097235 +0000 UTC m=+23.542395170,LastTimestamp:2025-12-03 13:51:28.165780454 +0000 UTC m=+37.606078389,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:51:28.834596 master-0 kubenswrapper[4808]: I1203 13:51:28.834512 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:29.168194 master-0 kubenswrapper[4808]: I1203 13:51:29.168020 4808 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log" Dec 03 13:51:29.833098 master-0 kubenswrapper[4808]: I1203 13:51:29.833003 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:29.904302 master-0 kubenswrapper[4808]: W1203 13:51:29.904219 4808 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Dec 03 13:51:29.904921 master-0 kubenswrapper[4808]: E1203 13:51:29.904346 4808 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Dec 03 13:51:30.832729 master-0 kubenswrapper[4808]: I1203 13:51:30.832661 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:30.909987 master-0 kubenswrapper[4808]: I1203 13:51:30.909868 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:51:30.909987 master-0 kubenswrapper[4808]: I1203 13:51:30.910019 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:30.911383 master-0 kubenswrapper[4808]: I1203 13:51:30.911317 4808 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:30.911383 master-0 kubenswrapper[4808]: I1203 13:51:30.911356 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:30.911383 master-0 kubenswrapper[4808]: I1203 13:51:30.911368 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:30.915189 master-0 kubenswrapper[4808]: I1203 13:51:30.915130 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:51:30.952673 master-0 kubenswrapper[4808]: E1203 13:51:30.952492 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 13:51:31.180643 master-0 kubenswrapper[4808]: I1203 13:51:31.180539 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:31.182548 master-0 kubenswrapper[4808]: I1203 13:51:31.182460 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:31.182548 master-0 kubenswrapper[4808]: I1203 13:51:31.182534 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:31.182548 master-0 kubenswrapper[4808]: I1203 13:51:31.182550 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:31.448961 master-0 kubenswrapper[4808]: I1203 13:51:31.448745 4808 csr.go:261] certificate signing request csr-mhmsz is approved, waiting to be issued Dec 03 13:51:31.476582 master-0 kubenswrapper[4808]: E1203 13:51:31.476481 4808 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io 
\"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 03 13:51:31.695688 master-0 kubenswrapper[4808]: I1203 13:51:31.695548 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:31.697309 master-0 kubenswrapper[4808]: I1203 13:51:31.697247 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:31.697309 master-0 kubenswrapper[4808]: I1203 13:51:31.697310 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:31.697420 master-0 kubenswrapper[4808]: I1203 13:51:31.697323 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:31.697420 master-0 kubenswrapper[4808]: I1203 13:51:31.697399 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:51:31.704236 master-0 kubenswrapper[4808]: E1203 13:51:31.704128 4808 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Dec 03 13:51:31.833025 master-0 kubenswrapper[4808]: I1203 13:51:31.832922 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 03 13:51:32.833481 master-0 kubenswrapper[4808]: I1203 13:51:32.833339 4808 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope 
Dec 03 13:51:33.682452 master-0 kubenswrapper[4808]: I1203 13:51:33.682381 4808 csr.go:257] certificate signing request csr-mhmsz is issued Dec 03 13:51:33.749316 master-0 kubenswrapper[4808]: I1203 13:51:33.749183 4808 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 03 13:51:33.843280 master-0 kubenswrapper[4808]: I1203 13:51:33.843215 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:33.867399 master-0 kubenswrapper[4808]: I1203 13:51:33.867317 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:33.931777 master-0 kubenswrapper[4808]: I1203 13:51:33.931727 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:34.242030 master-0 kubenswrapper[4808]: I1203 13:51:34.241934 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:34.242352 master-0 kubenswrapper[4808]: E1203 13:51:34.242096 4808 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Dec 03 13:51:34.264424 master-0 kubenswrapper[4808]: I1203 13:51:34.264332 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:34.284939 master-0 kubenswrapper[4808]: I1203 13:51:34.284848 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:34.356516 master-0 kubenswrapper[4808]: I1203 13:51:34.356444 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:34.538390 master-0 kubenswrapper[4808]: I1203 13:51:34.538221 4808 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 13:51:34.633852 master-0 kubenswrapper[4808]: I1203 13:51:34.633787 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes 
"master-0" not found Dec 03 13:51:34.633852 master-0 kubenswrapper[4808]: E1203 13:51:34.633847 4808 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Dec 03 13:51:34.683974 master-0 kubenswrapper[4808]: I1203 13:51:34.683830 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 08:03:50.238158072 +0000 UTC Dec 03 13:51:34.683974 master-0 kubenswrapper[4808]: I1203 13:51:34.683942 4808 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h12m15.554220337s for next certificate rotation Dec 03 13:51:34.841406 master-0 kubenswrapper[4808]: I1203 13:51:34.841253 4808 apiserver.go:52] "Watching apiserver" Dec 03 13:51:35.205552 master-0 kubenswrapper[4808]: I1203 13:51:35.205077 4808 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 13:51:35.205552 master-0 kubenswrapper[4808]: I1203 13:51:35.205404 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=[] Dec 03 13:51:35.254161 master-0 kubenswrapper[4808]: I1203 13:51:35.253940 4808 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 13:51:35.695436 master-0 kubenswrapper[4808]: I1203 13:51:35.695326 4808 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Dec 03 13:51:38.705516 master-0 kubenswrapper[4808]: I1203 13:51:38.705375 4808 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:51:38.707183 master-0 kubenswrapper[4808]: I1203 13:51:38.707121 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:51:38.707302 master-0 kubenswrapper[4808]: I1203 13:51:38.707188 4808 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:51:38.707302 master-0 kubenswrapper[4808]: I1203 13:51:38.707203 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:51:38.707394 master-0 kubenswrapper[4808]: I1203 13:51:38.707314 4808 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:51:40.953401 master-0 kubenswrapper[4808]: E1203 13:51:40.953120 4808 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 13:51:41.419137 master-0 kubenswrapper[4808]: I1203 13:51:41.419051 4808 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Dec 03 13:51:41.864152 master-0 kubenswrapper[4808]: I1203 13:51:41.863179 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Dec 03 13:51:41.877464 master-0 kubenswrapper[4808]: I1203 13:51:41.877383 4808 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 03 13:51:43.317720 master-0 kubenswrapper[4808]: I1203 13:51:43.317584 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Dec 03 13:51:43.318449 master-0 kubenswrapper[4808]: I1203 13:51:43.317870 4808 scope.go:117] "RemoveContainer" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" Dec 03 13:51:43.319696 master-0 kubenswrapper[4808]: E1203 13:51:43.318871 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
podUID="b495b0c38f2c54e7cc46282c5f92aab5" Dec 03 13:51:44.218301 master-0 kubenswrapper[4808]: I1203 13:51:44.218193 4808 scope.go:117] "RemoveContainer" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" Dec 03 13:51:44.218594 master-0 kubenswrapper[4808]: E1203 13:51:44.218468 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b495b0c38f2c54e7cc46282c5f92aab5" Dec 03 13:51:44.605883 master-0 kubenswrapper[4808]: I1203 13:51:44.605791 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"] Dec 03 13:51:44.606622 master-0 kubenswrapper[4808]: I1203 13:51:44.606231 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.612500 master-0 kubenswrapper[4808]: I1203 13:51:44.612444 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 13:51:44.613962 master-0 kubenswrapper[4808]: I1203 13:51:44.613901 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 13:51:44.618728 master-0 kubenswrapper[4808]: I1203 13:51:44.618665 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 13:51:44.618858 master-0 kubenswrapper[4808]: I1203 13:51:44.618708 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.618858 master-0 kubenswrapper[4808]: I1203 13:51:44.618764 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.618858 master-0 kubenswrapper[4808]: I1203 13:51:44.618802 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: 
\"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.618966 master-0 kubenswrapper[4808]: I1203 13:51:44.618891 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.618966 master-0 kubenswrapper[4808]: I1203 13:51:44.618931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.719757 master-0 kubenswrapper[4808]: I1203 13:51:44.719561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.719757 master-0 kubenswrapper[4808]: I1203 13:51:44.719664 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.719757 master-0 kubenswrapper[4808]: I1203 13:51:44.719699 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.719757 master-0 kubenswrapper[4808]: I1203 13:51:44.719736 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.719757 master-0 kubenswrapper[4808]: I1203 13:51:44.719767 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.720382 master-0 kubenswrapper[4808]: I1203 13:51:44.719865 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.720382 master-0 kubenswrapper[4808]: I1203 13:51:44.719949 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.720382 master-0 kubenswrapper[4808]: E1203 13:51:44.719994 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:44.720382 master-0 kubenswrapper[4808]: E1203 13:51:44.720218 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:51:45.220174901 +0000 UTC m=+54.660473016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:44.721220 master-0 kubenswrapper[4808]: I1203 13:51:44.721162 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.757587 master-0 kubenswrapper[4808]: I1203 13:51:44.757482 4808 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 13:51:44.763086 master-0 kubenswrapper[4808]: I1203 13:51:44.762992 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:44.793235 master-0 kubenswrapper[4808]: I1203 13:51:44.793161 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-6cbf58c977-8lh6n"] Dec 03 13:51:44.793605 master-0 kubenswrapper[4808]: I1203 13:51:44.793590 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:44.796589 master-0 kubenswrapper[4808]: I1203 13:51:44.796453 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 13:51:44.796968 master-0 kubenswrapper[4808]: I1203 13:51:44.796721 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 13:51:44.798296 master-0 kubenswrapper[4808]: I1203 13:51:44.798212 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 13:51:44.921166 master-0 kubenswrapper[4808]: I1203 13:51:44.920889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 
03 13:51:44.921166 master-0 kubenswrapper[4808]: I1203 13:51:44.920999 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:44.921166 master-0 kubenswrapper[4808]: I1203 13:51:44.921050 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.021654 master-0 kubenswrapper[4808]: I1203 13:51:45.021563 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.021654 master-0 kubenswrapper[4808]: I1203 13:51:45.021635 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.022042 master-0 kubenswrapper[4808]: I1203 13:51:45.021679 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: 
\"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.022042 master-0 kubenswrapper[4808]: I1203 13:51:45.021805 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.025573 master-0 kubenswrapper[4808]: I1203 13:51:45.025511 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.083002 master-0 kubenswrapper[4808]: I1203 13:51:45.082892 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.108769 master-0 kubenswrapper[4808]: I1203 13:51:45.108653 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:51:45.220693 master-0 kubenswrapper[4808]: I1203 13:51:45.220223 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1"} Dec 03 13:51:45.222649 master-0 kubenswrapper[4808]: I1203 13:51:45.222606 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:45.222866 master-0 kubenswrapper[4808]: E1203 13:51:45.222775 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:45.223111 master-0 kubenswrapper[4808]: E1203 13:51:45.223048 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:51:46.2229944 +0000 UTC m=+55.663292335 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:46.232188 master-0 kubenswrapper[4808]: I1203 13:51:46.232010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:46.233415 master-0 kubenswrapper[4808]: E1203 13:51:46.232238 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:46.233415 master-0 kubenswrapper[4808]: E1203 13:51:46.232359 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:51:48.232330365 +0000 UTC m=+57.672628300 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:48.245229 master-0 kubenswrapper[4808]: I1203 13:51:48.245029 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:48.245229 master-0 kubenswrapper[4808]: E1203 13:51:48.245290 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:48.246211 master-0 kubenswrapper[4808]: E1203 13:51:48.245378 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:51:52.245354489 +0000 UTC m=+61.685652414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:49.234442 master-0 kubenswrapper[4808]: I1203 13:51:49.234370 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-stq5g"] Dec 03 13:51:49.234784 master-0 kubenswrapper[4808]: I1203 13:51:49.234756 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.236752 master-0 kubenswrapper[4808]: I1203 13:51:49.236672 4808 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Dec 03 13:51:49.238060 master-0 kubenswrapper[4808]: I1203 13:51:49.238003 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Dec 03 13:51:49.238060 master-0 kubenswrapper[4808]: I1203 13:51:49.238016 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Dec 03 13:51:49.238431 master-0 kubenswrapper[4808]: I1203 13:51:49.238379 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Dec 03 13:51:49.351811 master-0 kubenswrapper[4808]: I1203 13:51:49.351684 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.351811 master-0 kubenswrapper[4808]: I1203 13:51:49.351800 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.351811 master-0 kubenswrapper[4808]: I1203 13:51:49.351826 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.351811 master-0 kubenswrapper[4808]: I1203 13:51:49.351847 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spl5n\" (UniqueName: \"kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.352705 master-0 kubenswrapper[4808]: I1203 13:51:49.351870 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453239 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " 
pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453275 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453324 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453292 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453362 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spl5n\" (UniqueName: \"kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453381 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: 
\"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453365 master-0 kubenswrapper[4808]: I1203 13:51:49.453411 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.453894 master-0 kubenswrapper[4808]: I1203 13:51:49.453392 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.543699 master-0 kubenswrapper[4808]: I1203 13:51:49.543521 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spl5n\" (UniqueName: \"kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n\") pod \"assisted-installer-controller-stq5g\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:49.566497 master-0 kubenswrapper[4808]: I1203 13:51:49.566352 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:51:51.276381 master-0 kubenswrapper[4808]: W1203 13:51:51.275955 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9afa5e14_6832_4650_9401_97359c445e61.slice/crio-eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab WatchSource:0}: Error finding container eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab: Status 404 returned error can't find the container with id eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab Dec 03 13:51:52.239623 master-0 kubenswrapper[4808]: I1203 13:51:52.239466 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-stq5g" event={"ID":"9afa5e14-6832-4650-9401-97359c445e61","Type":"ContainerStarted","Data":"eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab"} Dec 03 13:51:52.276033 master-0 kubenswrapper[4808]: I1203 13:51:52.275902 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: E1203 13:51:52.276089 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,Command:[/bin/bash -c #!/bin/bash Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: set -o allexport Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 03 
13:51:52.276191 master-0 kubenswrapper[4808]: source /etc/kubernetes/apiserver-url.env Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: else Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: exit 1 Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: fi Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c16e0847bd9ae0470e9702e5cfb4ccd5551a42ff062bd507f267ed55d1c31b42,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9e597b928c0bdcdebea19f093353a7ada98f5164601abf23aa97f0065c6e293,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc42ec15e3ecccc0942415ec68b27c2c10f53f084b6fa23caa1e81fc70f3629,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b
5513f97cf54bb99631c2abe860949293456886a74f87fe,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4629a2d090ecc0b613a9e6b50601fd2cdb99cb2e511f1fed6d335106f2789baf,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hpt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-6cbf58c977-8lh6n_openshift-network-operator(e97e1725-cb55-4ce3-952d-a4fd0731577d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: > logger="UnhandledError" Dec 03 13:51:52.276191 master-0 kubenswrapper[4808]: E1203 
13:51:52.276104 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:52.277776 master-0 kubenswrapper[4808]: E1203 13:51:52.276318 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:00.276292841 +0000 UTC m=+69.716590776 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:51:52.277776 master-0 kubenswrapper[4808]: E1203 13:51:52.277556 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" Dec 03 13:51:52.674227 master-0 kubenswrapper[4808]: I1203 13:51:52.673684 4808 csr.go:261] certificate signing request csr-dhgfb is approved, waiting to be issued Dec 03 13:51:52.724608 master-0 kubenswrapper[4808]: I1203 13:51:52.724539 4808 csr.go:257] certificate signing request csr-dhgfb is issued Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: E1203 13:51:53.246915 4808 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,Command:[/bin/bash -c #!/bin/bash Dec 03 13:51:53.246983 master-0 
kubenswrapper[4808]: set -o allexport Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: source /etc/kubernetes/apiserver-url.env Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: else Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: exit 1 Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: fi Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c16e0847bd9ae0470e9702e5cfb4ccd5551a42ff062bd507f267ed55d1c31b42,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9e597b928c0bdcdebea19f093353a7ada98f5164601abf23aa97f0065c6e293,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2dc42ec15e3ecccc0942415ec68b27c
2c10f53f084b6fa23caa1e81fc70f3629,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4629a2d090ecc0b613a9e6b50601fd2cdb99cb2e511f1fed6d335106f2789baf,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hpt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-6cbf58c977-8lh6n_openshift-network-operator(e97e1725-cb55-4ce3-952d-a4fd0731577d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 03 13:51:53.246983 master-0 kubenswrapper[4808]: > logger="UnhandledError" Dec 03 13:51:53.248511 master-0 kubenswrapper[4808]: E1203 13:51:53.248427 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" Dec 03 13:51:53.726784 master-0 kubenswrapper[4808]: I1203 13:51:53.726683 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 07:27:29.382153013 +0000 UTC Dec 03 13:51:53.726784 master-0 kubenswrapper[4808]: I1203 13:51:53.726742 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h35m35.655414093s for next certificate rotation Dec 03 13:51:54.080099 master-0 kubenswrapper[4808]: I1203 13:51:54.080003 4808 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 13:51:54.727357 master-0 kubenswrapper[4808]: I1203 13:51:54.727238 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 06:52:44.546439731 +0000 UTC Dec 03 13:51:54.727357 master-0 kubenswrapper[4808]: I1203 13:51:54.727322 4808 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h0m49.819121401s for next certificate rotation Dec 03 13:51:55.006396 master-0 kubenswrapper[4808]: I1203 13:51:55.005837 4808 scope.go:117] "RemoveContainer" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" Dec 03 13:51:56.146885 master-0 kubenswrapper[4808]: E1203 13:51:56.146785 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verification,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:NOTIFY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRe
f:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spl5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID 
SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-stq5g_assisted-installer(9afa5e14-6832-4650-9401-97359c445e61): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 03 13:51:56.148297 master-0 kubenswrapper[4808]: E1203 13:51:56.148169 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-stq5g" podUID="9afa5e14-6832-4650-9401-97359c445e61" Dec 03 13:51:56.252674 master-0 kubenswrapper[4808]: I1203 13:51:56.252453 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log" Dec 03 13:51:56.253175 master-0 kubenswrapper[4808]: I1203 13:51:56.253117 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02"} Dec 03 13:51:56.257126 master-0 kubenswrapper[4808]: E1203 13:51:56.256890 4808 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:assisted-installer-controller,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLUSTER_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:cluster-id,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:INVENTORY_URL,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:inventory-url,Optional:nil,},SecretKeyRef:nil,},},EnvVar{Name:PULL_SECRET_TOKEN,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-secret,},Key:pull-secret-token,Optional:nil,},},},EnvVar{Name:CA_CERT_PATH,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:ca-cert-path,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:SKIP_CERT_VERIFICATION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:skip-cert-verification,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:OPENSHIFT_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:NOTIFY_NUM_REBOOTS,Value:true,ValueFrom:nil,},EnvVar{Name:HIGH_AVAILABILITY_MODE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:high-availability-mode,Optional:*true,},SecretKeyRe
f:nil,},},EnvVar{Name:CHECK_CLUSTER_VERSION,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:check-cluster-version,Optional:*true,},SecretKeyRef:nil,},},EnvVar{Name:MUST_GATHER_IMAGE,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:assisted-installer-controller-config,},Key:must-gather-image,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-ca-bundle,ReadOnly:false,MountPath:/etc/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-run-resolv-conf,ReadOnly:false,MountPath:/tmp/var-run-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-resolv-conf,ReadOnly:false,MountPath:/tmp/host-resolv.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sno-bootstrap-files,ReadOnly:false,MountPath:/tmp/bootstrap-secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-spl5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[KILL MKNOD SETGID 
SETUID],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000120000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod assisted-installer-controller-stq5g_assisted-installer(9afa5e14-6832-4650-9401-97359c445e61): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 03 13:51:56.259043 master-0 kubenswrapper[4808]: E1203 13:51:56.258910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"assisted-installer-controller\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="assisted-installer/assisted-installer-controller-stq5g" podUID="9afa5e14-6832-4650-9401-97359c445e61" Dec 03 13:51:56.349406 master-0 kubenswrapper[4808]: I1203 13:51:56.349249 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=13.349173065 podStartE2EDuration="13.349173065s" podCreationTimestamp="2025-12-03 13:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:51:56.34902133 +0000 UTC m=+65.789319265" watchObservedRunningTime="2025-12-03 13:51:56.349173065 +0000 UTC m=+65.789471000" Dec 03 13:52:00.330991 master-0 kubenswrapper[4808]: I1203 13:52:00.330801 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod 
\"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:52:00.332180 master-0 kubenswrapper[4808]: E1203 13:52:00.331077 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:52:00.332180 master-0 kubenswrapper[4808]: E1203 13:52:00.331210 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:16.331176234 +0000 UTC m=+85.771474169 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:52:05.088311 master-0 kubenswrapper[4808]: I1203 13:52:05.088182 4808 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 13:52:06.072892 master-0 kubenswrapper[4808]: I1203 13:52:06.072827 4808 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 13:52:10.290509 master-0 kubenswrapper[4808]: I1203 13:52:10.290388 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237"} Dec 03 13:52:10.616824 master-0 kubenswrapper[4808]: I1203 13:52:10.616673 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" podStartSLOduration=19.465179455 podStartE2EDuration="26.616640832s" podCreationTimestamp="2025-12-03 13:51:44 +0000 UTC" firstStartedPulling="2025-12-03 13:51:45.124197136 +0000 UTC m=+54.564495071" lastFinishedPulling="2025-12-03 13:51:52.275658503 +0000 UTC m=+61.715956448" observedRunningTime="2025-12-03 13:52:10.616440806 +0000 UTC m=+80.056738751" watchObservedRunningTime="2025-12-03 13:52:10.616640832 +0000 UTC m=+80.056938767" Dec 03 13:52:12.011062 master-0 kubenswrapper[4808]: I1203 13:52:12.010993 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Dec 03 13:52:12.023020 master-0 kubenswrapper[4808]: I1203 13:52:12.022867 4808 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Dec 03 13:52:13.301907 master-0 kubenswrapper[4808]: I1203 13:52:13.301430 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-stq5g" event={"ID":"9afa5e14-6832-4650-9401-97359c445e61","Type":"ContainerStarted","Data":"47a8ddfc7f7b71da4bd36254308448e4c5ee29fcc63f3b852aed944db5125062"} Dec 03 13:52:15.310820 master-0 kubenswrapper[4808]: I1203 13:52:15.310699 4808 generic.go:334] "Generic (PLEG): container finished" podID="9afa5e14-6832-4650-9401-97359c445e61" containerID="47a8ddfc7f7b71da4bd36254308448e4c5ee29fcc63f3b852aed944db5125062" exitCode=0 Dec 03 13:52:15.310820 master-0 kubenswrapper[4808]: I1203 13:52:15.310806 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-stq5g" event={"ID":"9afa5e14-6832-4650-9401-97359c445e61","Type":"ContainerDied","Data":"47a8ddfc7f7b71da4bd36254308448e4c5ee29fcc63f3b852aed944db5125062"} Dec 03 13:52:15.685198 master-0 kubenswrapper[4808]: I1203 13:52:15.685000 4808 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-master-0-master-0"] Dec 03 13:52:15.685454 master-0 kubenswrapper[4808]: W1203 13:52:15.685211 4808 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Dec 03 13:52:16.339853 master-0 kubenswrapper[4808]: I1203 13:52:16.339779 4808 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 13:52:16.409417 master-0 kubenswrapper[4808]: I1203 13:52:16.409231 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf\") pod \"9afa5e14-6832-4650-9401-97359c445e61\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " Dec 03 13:52:16.409417 master-0 kubenswrapper[4808]: I1203 13:52:16.409358 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle\") pod \"9afa5e14-6832-4650-9401-97359c445e61\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " Dec 03 13:52:16.409417 master-0 kubenswrapper[4808]: I1203 13:52:16.409394 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf\") pod \"9afa5e14-6832-4650-9401-97359c445e61\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " Dec 03 13:52:16.409417 master-0 kubenswrapper[4808]: I1203 13:52:16.409430 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spl5n\" (UniqueName: \"kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n\") pod \"9afa5e14-6832-4650-9401-97359c445e61\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: I1203 13:52:16.409464 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files\") pod \"9afa5e14-6832-4650-9401-97359c445e61\" (UID: \"9afa5e14-6832-4650-9401-97359c445e61\") " Dec 03 13:52:16.410091 master-0 
kubenswrapper[4808]: I1203 13:52:16.409574 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: I1203 13:52:16.409579 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "9afa5e14-6832-4650-9401-97359c445e61" (UID: "9afa5e14-6832-4650-9401-97359c445e61"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: I1203 13:52:16.409641 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "9afa5e14-6832-4650-9401-97359c445e61" (UID: "9afa5e14-6832-4650-9401-97359c445e61"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: I1203 13:52:16.409666 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "9afa5e14-6832-4650-9401-97359c445e61" (UID: "9afa5e14-6832-4650-9401-97359c445e61"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: E1203 13:52:16.409909 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:52:16.410091 master-0 kubenswrapper[4808]: E1203 13:52:16.410090 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:48.4100314 +0000 UTC m=+117.850329475 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:52:16.410897 master-0 kubenswrapper[4808]: I1203 13:52:16.410796 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "9afa5e14-6832-4650-9401-97359c445e61" (UID: "9afa5e14-6832-4650-9401-97359c445e61"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:52:16.414852 master-0 kubenswrapper[4808]: I1203 13:52:16.414738 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n" (OuterVolumeSpecName: "kube-api-access-spl5n") pod "9afa5e14-6832-4650-9401-97359c445e61" (UID: "9afa5e14-6832-4650-9401-97359c445e61"). InnerVolumeSpecName "kube-api-access-spl5n". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:52:16.510653 master-0 kubenswrapper[4808]: I1203 13:52:16.510566 4808 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:16.510653 master-0 kubenswrapper[4808]: I1203 13:52:16.510626 4808 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:16.510653 master-0 kubenswrapper[4808]: I1203 13:52:16.510646 4808 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:16.510653 master-0 kubenswrapper[4808]: I1203 13:52:16.510668 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spl5n\" (UniqueName: \"kubernetes.io/projected/9afa5e14-6832-4650-9401-97359c445e61-kube-api-access-spl5n\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:16.511026 master-0 kubenswrapper[4808]: I1203 13:52:16.510687 4808 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/9afa5e14-6832-4650-9401-97359c445e61-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:17.790019 master-0 kubenswrapper[4808]: I1203 13:52:17.789930 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-stq5g" event={"ID":"9afa5e14-6832-4650-9401-97359c445e61","Type":"ContainerDied","Data":"eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab"}
Dec 03 13:52:17.790019 master-0 kubenswrapper[4808]: I1203 13:52:17.790016 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab"
Dec 03 13:52:17.790889 master-0 kubenswrapper[4808]: I1203 13:52:17.790085 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g"
Dec 03 13:52:18.522575 master-0 kubenswrapper[4808]: I1203 13:52:18.522476 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="assisted-installer/assisted-installer-controller-stq5g" podStartSLOduration=461.65578334 podStartE2EDuration="7m46.522434036s" podCreationTimestamp="2025-12-03 13:44:32 +0000 UTC" firstStartedPulling="2025-12-03 13:51:51.279552203 +0000 UTC m=+60.719850148" lastFinishedPulling="2025-12-03 13:51:56.146202909 +0000 UTC m=+65.586500844" observedRunningTime="2025-12-03 13:52:18.522398145 +0000 UTC m=+87.962696080" watchObservedRunningTime="2025-12-03 13:52:18.522434036 +0000 UTC m=+87.962731971"
Dec 03 13:52:19.164813 master-0 kubenswrapper[4808]: I1203 13:52:19.164725 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Dec 03 13:52:19.214573 master-0 kubenswrapper[4808]: I1203 13:52:19.214325 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=5.214299454 podStartE2EDuration="5.214299454s" podCreationTimestamp="2025-12-03 13:52:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:52:19.213566883 +0000 UTC m=+88.653864838" watchObservedRunningTime="2025-12-03 13:52:19.214299454 +0000 UTC m=+88.654597389"
Dec 03 13:52:31.541200 master-0 kubenswrapper[4808]: I1203 13:52:31.541000 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=12.540963005 podStartE2EDuration="12.540963005s" podCreationTimestamp="2025-12-03 13:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:52:22.381638892 +0000 UTC m=+91.821936827" watchObservedRunningTime="2025-12-03 13:52:31.540963005 +0000 UTC m=+100.981260970"
Dec 03 13:52:31.542728 master-0 kubenswrapper[4808]: I1203 13:52:31.542406 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-jqvnb"]
Dec 03 13:52:31.542728 master-0 kubenswrapper[4808]: E1203 13:52:31.542664 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 13:52:31.542728 master-0 kubenswrapper[4808]: I1203 13:52:31.542700 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 13:52:31.543099 master-0 kubenswrapper[4808]: I1203 13:52:31.542780 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 13:52:31.543450 master-0 kubenswrapper[4808]: I1203 13:52:31.543344 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:31.582655 master-0 kubenswrapper[4808]: I1203 13:52:31.582589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxxc\" (UniqueName: \"kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc\") pod \"mtu-prober-jqvnb\" (UID: \"5eae43c1-ef3e-4175-8f95-220e490e3017\") " pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:31.683463 master-0 kubenswrapper[4808]: I1203 13:52:31.683368 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxxc\" (UniqueName: \"kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc\") pod \"mtu-prober-jqvnb\" (UID: \"5eae43c1-ef3e-4175-8f95-220e490e3017\") " pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:32.043637 master-0 kubenswrapper[4808]: I1203 13:52:32.043564 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxxc\" (UniqueName: \"kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc\") pod \"mtu-prober-jqvnb\" (UID: \"5eae43c1-ef3e-4175-8f95-220e490e3017\") " pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:32.157334 master-0 kubenswrapper[4808]: I1203 13:52:32.157217 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:32.847541 master-0 kubenswrapper[4808]: I1203 13:52:32.847478 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqvnb" event={"ID":"5eae43c1-ef3e-4175-8f95-220e490e3017","Type":"ContainerStarted","Data":"3cd671840d59b133f88fb03765cdb68615a01b375fa5cbcc45c53662d0aad8d5"}
Dec 03 13:52:33.851922 master-0 kubenswrapper[4808]: I1203 13:52:33.851780 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqvnb" event={"ID":"5eae43c1-ef3e-4175-8f95-220e490e3017","Type":"ContainerStarted","Data":"c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541"}
Dec 03 13:52:35.863252 master-0 kubenswrapper[4808]: I1203 13:52:35.863072 4808 generic.go:334] "Generic (PLEG): container finished" podID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerID="c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541" exitCode=0
Dec 03 13:52:35.863252 master-0 kubenswrapper[4808]: I1203 13:52:35.863146 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqvnb" event={"ID":"5eae43c1-ef3e-4175-8f95-220e490e3017","Type":"ContainerDied","Data":"c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541"}
Dec 03 13:52:36.887299 master-0 kubenswrapper[4808]: I1203 13:52:36.887167 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:37.028080 master-0 kubenswrapper[4808]: I1203 13:52:37.027984 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxxc\" (UniqueName: \"kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc\") pod \"5eae43c1-ef3e-4175-8f95-220e490e3017\" (UID: \"5eae43c1-ef3e-4175-8f95-220e490e3017\") "
Dec 03 13:52:37.032742 master-0 kubenswrapper[4808]: I1203 13:52:37.032617 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc" (OuterVolumeSpecName: "kube-api-access-rxxxc") pod "5eae43c1-ef3e-4175-8f95-220e490e3017" (UID: "5eae43c1-ef3e-4175-8f95-220e490e3017"). InnerVolumeSpecName "kube-api-access-rxxxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:52:37.129634 master-0 kubenswrapper[4808]: I1203 13:52:37.129340 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxxc\" (UniqueName: \"kubernetes.io/projected/5eae43c1-ef3e-4175-8f95-220e490e3017-kube-api-access-rxxxc\") on node \"master-0\" DevicePath \"\""
Dec 03 13:52:37.871078 master-0 kubenswrapper[4808]: I1203 13:52:37.870998 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-jqvnb" event={"ID":"5eae43c1-ef3e-4175-8f95-220e490e3017","Type":"ContainerDied","Data":"3cd671840d59b133f88fb03765cdb68615a01b375fa5cbcc45c53662d0aad8d5"}
Dec 03 13:52:37.871078 master-0 kubenswrapper[4808]: I1203 13:52:37.871086 4808 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd671840d59b133f88fb03765cdb68615a01b375fa5cbcc45c53662d0aad8d5"
Dec 03 13:52:37.871546 master-0 kubenswrapper[4808]: I1203 13:52:37.871529 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-jqvnb"
Dec 03 13:52:38.301034 master-0 kubenswrapper[4808]: I1203 13:52:38.300812 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-jqvnb"]
Dec 03 13:52:38.756043 master-0 kubenswrapper[4808]: I1203 13:52:38.755937 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-jqvnb"]
Dec 03 13:52:39.009840 master-0 kubenswrapper[4808]: I1203 13:52:39.009613 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" path="/var/lib/kubelet/pods/5eae43c1-ef3e-4175-8f95-220e490e3017/volumes"
Dec 03 13:52:47.124872 master-0 kubenswrapper[4808]: I1203 13:52:47.124712 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Dec 03 13:52:48.077095 master-0 kubenswrapper[4808]: I1203 13:52:48.077017 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-kk4tm"]
Dec 03 13:52:48.077635 master-0 kubenswrapper[4808]: E1203 13:52:48.077612 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 13:52:48.077739 master-0 kubenswrapper[4808]: I1203 13:52:48.077724 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 13:52:48.077866 master-0 kubenswrapper[4808]: I1203 13:52:48.077850 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 13:52:48.078355 master-0 kubenswrapper[4808]: I1203 13:52:48.078326 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.081231 master-0 kubenswrapper[4808]: I1203 13:52:48.081168 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 13:52:48.083153 master-0 kubenswrapper[4808]: I1203 13:52:48.083108 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 03 13:52:48.083471 master-0 kubenswrapper[4808]: I1203 13:52:48.083435 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 13:52:48.083668 master-0 kubenswrapper[4808]: I1203 13:52:48.083639 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 13:52:48.199518 master-0 kubenswrapper[4808]: I1203 13:52:48.199427 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.199398642 podStartE2EDuration="1.199398642s" podCreationTimestamp="2025-12-03 13:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:52:48.174630901 +0000 UTC m=+117.614928836" watchObservedRunningTime="2025-12-03 13:52:48.199398642 +0000 UTC m=+117.639696587"
Dec 03 13:52:48.201161 master-0 kubenswrapper[4808]: I1203 13:52:48.201083 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-42hmk"]
Dec 03 13:52:48.202066 master-0 kubenswrapper[4808]: I1203 13:52:48.202034 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.204593 master-0 kubenswrapper[4808]: I1203 13:52:48.204552 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 13:52:48.204689 master-0 kubenswrapper[4808]: I1203 13:52:48.204595 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 13:52:48.266848 master-0 kubenswrapper[4808]: I1203 13:52:48.266753 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.266848 master-0 kubenswrapper[4808]: I1203 13:52:48.266828 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.266848 master-0 kubenswrapper[4808]: I1203 13:52:48.266857 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.266889 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.266911 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.266936 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.266962 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.266992 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267016 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267036 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267058 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267080 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267104 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267127 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267150 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267174 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267220 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267277 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.267307 master-0 kubenswrapper[4808]: I1203 13:52:48.267307 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.268037 master-0 kubenswrapper[4808]: I1203 13:52:48.267344 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.268037 master-0 kubenswrapper[4808]: I1203 13:52:48.267388 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.268037 master-0 kubenswrapper[4808]: I1203 13:52:48.267418 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.268037 master-0 kubenswrapper[4808]: I1203 13:52:48.267436 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.268037 master-0 kubenswrapper[4808]: I1203 13:52:48.267451 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367783 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367849 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367876 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367899 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367925 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367946 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.367970 master-0 kubenswrapper[4808]: I1203 13:52:48.367967 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.367994 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368005 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368068 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368086 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368113 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368137 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368195 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368222 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368250 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368294 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368305 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368348 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.368505 master-0 kubenswrapper[4808]: I1203 13:52:48.368468 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369054 master-0 kubenswrapper[4808]: I1203 13:52:48.368560 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.369054 master-0 kubenswrapper[4808]: I1203 13:52:48.368225 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369054 master-0 kubenswrapper[4808]: I1203 13:52:48.368835 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369387 master-0 kubenswrapper[4808]: I1203 13:52:48.369337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369472 master-0 kubenswrapper[4808]: I1203 13:52:48.369237 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369560 master-0 kubenswrapper[4808]: I1203 13:52:48.369532 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369613 master-0 kubenswrapper[4808]: I1203 13:52:48.369508 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.369613 master-0 kubenswrapper[4808]: I1203 13:52:48.369478 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.369721 master-0 kubenswrapper[4808]: I1203 13:52:48.369702 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.369923 master-0 kubenswrapper[4808]: I1203 13:52:48.369877 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370028 master-0 kubenswrapper[4808]: I1203 13:52:48.370010 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370232 master-0 kubenswrapper[4808]: I1203 13:52:48.370212 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370392 master-0 kubenswrapper[4808]: I1203 13:52:48.370348 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370468 master-0 kubenswrapper[4808]: I1203 13:52:48.369972 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.370468 master-0 kubenswrapper[4808]: I1203 13:52:48.370069 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370552 master-0 kubenswrapper[4808]: I1203 13:52:48.370232 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.370552 master-0 kubenswrapper[4808]: I1203 13:52:48.369709 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.370552 master-0 kubenswrapper[4808]: I1203 13:52:48.370326 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.370552 master-0 kubenswrapper[4808]: I1203 13:52:48.370358 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:52:48.370738 master-0 kubenswrapper[4808]: I1203 13:52:48.370606 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:52:48.370738 master-0 kubenswrapper[4808]: I1203 13:52:48.370647 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:52:48.370738 master-0 kubenswrapper[4808]: I1203 13:52:48.370693 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:52:48.370738 master-0 kubenswrapper[4808]: I1203 13:52:48.370717 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:52:48.370969 master-0 kubenswrapper[4808]: I1203 13:52:48.370096 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.371069 master-0 kubenswrapper[4808]: I1203 13:52:48.371015 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.371069 master-0 kubenswrapper[4808]: I1203 13:52:48.371047 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.371185 master-0 kubenswrapper[4808]: I1203 13:52:48.370939 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.371479 master-0 kubenswrapper[4808]: I1203 13:52:48.371426 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.391629 master-0 kubenswrapper[4808]: I1203 13:52:48.391555 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.392690 master-0 kubenswrapper[4808]: I1203 13:52:48.392631 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.407865 master-0 kubenswrapper[4808]: I1203 13:52:48.407723 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kk4tm"
Dec 03 13:52:48.472115 master-0 kubenswrapper[4808]: I1203 13:52:48.472040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:52:48.472341 master-0 kubenswrapper[4808]: E1203 13:52:48.472279 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Dec 03 13:52:48.472423 master-0 kubenswrapper[4808]: E1203 13:52:48.472358 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:53:52.472331502 +0000 UTC m=+181.912629437 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found
Dec 03 13:52:48.514708 master-0 kubenswrapper[4808]: I1203 13:52:48.514611 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:52:48.527102 master-0 kubenswrapper[4808]: W1203 13:52:48.527027 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c2a40b_213c_42f1_9459_87c2e780a75f.slice/crio-d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3 WatchSource:0}: Error finding container d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3: Status 404 returned error can't find the container with id d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3
Dec 03 13:52:48.903286 master-0 kubenswrapper[4808]: I1203 13:52:48.903210 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773"}
Dec 03 13:52:48.904442 master-0 kubenswrapper[4808]: I1203 13:52:48.904404 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3"}
Dec 03 13:52:48.944700 master-0 kubenswrapper[4808]: I1203 13:52:48.944631 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ch7xd"]
Dec 03 13:52:48.945133 master-0 kubenswrapper[4808]: I1203 13:52:48.945094 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:48.945302 master-0 kubenswrapper[4808]: E1203 13:52:48.945220 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:52:48.975958 master-0 kubenswrapper[4808]: I1203 13:52:48.975883 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:48.975958 master-0 kubenswrapper[4808]: I1203 13:52:48.975939 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:49.076442 master-0 kubenswrapper[4808]: I1203 13:52:49.076365 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:49.076442 master-0 kubenswrapper[4808]: I1203 13:52:49.076441 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:49.076736 master-0 kubenswrapper[4808]: E1203 13:52:49.076596 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:49.076736 master-0 kubenswrapper[4808]: E1203 13:52:49.076671 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:49.576649174 +0000 UTC m=+119.016947109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:49.097288 master-0 kubenswrapper[4808]: I1203 13:52:49.097122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:49.581027 master-0 kubenswrapper[4808]: I1203 13:52:49.580859 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") "
pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:49.581842 master-0 kubenswrapper[4808]: E1203 13:52:49.581176 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:49.581842 master-0 kubenswrapper[4808]: E1203 13:52:49.581367 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:50.581337447 +0000 UTC m=+120.021635572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:50.021583 master-0 kubenswrapper[4808]: I1203 13:52:50.021420 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Dec 03 13:52:50.620701 master-0 kubenswrapper[4808]: I1203 13:52:50.620595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:50.621400 master-0 kubenswrapper[4808]: E1203 13:52:50.620990 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:50.621400 master-0 kubenswrapper[4808]: E1203 13:52:50.621132 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:52.621066556 +0000 UTC m=+122.061364491 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:50.929174 master-0 kubenswrapper[4808]: E1203 13:52:50.928555 4808 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Dec 03 13:52:50.977156 master-0 kubenswrapper[4808]: E1203 13:52:50.977079 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:52:51.005901 master-0 kubenswrapper[4808]: I1203 13:52:51.005797 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:51.006235 master-0 kubenswrapper[4808]: E1203 13:52:51.006104 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:52:51.023329 master-0 kubenswrapper[4808]: I1203 13:52:51.023158 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.023131454 podStartE2EDuration="1.023131454s" podCreationTimestamp="2025-12-03 13:52:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:52:51.022424473 +0000 UTC m=+120.462722438" watchObservedRunningTime="2025-12-03 13:52:51.023131454 +0000 UTC m=+120.463429389"
Dec 03 13:52:52.011972 master-0 kubenswrapper[4808]: I1203 13:52:52.011359 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"8508b9103a62149e40a9f8b253309fee2580cb816ac86bfe2d7376f7c71e976c"}
Dec 03 13:52:52.011972 master-0 kubenswrapper[4808]: I1203 13:52:52.011375 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="8508b9103a62149e40a9f8b253309fee2580cb816ac86bfe2d7376f7c71e976c" exitCode=0
Dec 03 13:52:52.709633 master-0 kubenswrapper[4808]: I1203 13:52:52.709522 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:52.710085 master-0 kubenswrapper[4808]: E1203 13:52:52.709820 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:52.710085 master-0 kubenswrapper[4808]: E1203 13:52:52.709944 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:52:56.709923447 +0000 UTC m=+126.150221382 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:53.005482 master-0 kubenswrapper[4808]: I1203 13:52:53.005319 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:53.005778 master-0 kubenswrapper[4808]: E1203 13:52:53.005529 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:52:55.006520 master-0 kubenswrapper[4808]: I1203 13:52:55.006435 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:55.007245 master-0 kubenswrapper[4808]: E1203 13:52:55.006772 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:52:55.978548 master-0 kubenswrapper[4808]: E1203 13:52:55.978402 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:52:56.738519 master-0 kubenswrapper[4808]: I1203 13:52:56.738426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:56.739301 master-0 kubenswrapper[4808]: E1203 13:52:56.738798 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:56.739301 master-0 kubenswrapper[4808]: E1203 13:52:56.738976 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:53:04.738954414 +0000 UTC m=+134.179252349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:52:57.005889 master-0 kubenswrapper[4808]: I1203 13:52:57.005711 4808 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:57.006192 master-0 kubenswrapper[4808]: E1203 13:52:57.005898 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:52:59.005653 master-0 kubenswrapper[4808]: I1203 13:52:59.005523 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:52:59.006823 master-0 kubenswrapper[4808]: E1203 13:52:59.005819 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:00.979214 master-0 kubenswrapper[4808]: E1203 13:53:00.979113 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:01.005714 master-0 kubenswrapper[4808]: I1203 13:53:01.005572 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:01.011285 master-0 kubenswrapper[4808]: E1203 13:53:01.011132 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:01.638642 master-0 kubenswrapper[4808]: I1203 13:53:01.638565 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"]
Dec 03 13:53:01.639044 master-0 kubenswrapper[4808]: I1203 13:53:01.639014 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.642308 master-0 kubenswrapper[4808]: I1203 13:53:01.642209 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 13:53:01.642391 master-0 kubenswrapper[4808]: I1203 13:53:01.642317 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 13:53:01.642391 master-0 kubenswrapper[4808]: I1203 13:53:01.642320 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 13:53:01.642472 master-0 kubenswrapper[4808]: I1203 13:53:01.642275 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 13:53:01.644983 master-0 kubenswrapper[4808]: I1203 13:53:01.644952 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 13:53:01.778016 master-0 kubenswrapper[4808]: I1203 13:53:01.777819 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.778016 master-0 kubenswrapper[4808]: I1203 13:53:01.777910 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.778016 master-0 kubenswrapper[4808]: I1203 13:53:01.777935 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.778016 master-0 kubenswrapper[4808]: I1203 13:53:01.778014 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.878707 master-0 kubenswrapper[4808]: I1203 13:53:01.878551 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.879210 master-0 kubenswrapper[4808]: I1203 13:53:01.878740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.879210 master-0 kubenswrapper[4808]: I1203 13:53:01.878775 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.879210 master-0 kubenswrapper[4808]: I1203 13:53:01.878815 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.879939 master-0 kubenswrapper[4808]: I1203 13:53:01.879871 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.879939 master-0 kubenswrapper[4808]: I1203 13:53:01.879910 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:01.883419 master-0 kubenswrapper[4808]: I1203 13:53:01.883351 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:53:02.458061 master-0 kubenswrapper[4808]: I1203 13:53:02.457829 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5stk"]
Dec 03 13:53:02.458929 master-0 kubenswrapper[4808]: I1203 13:53:02.458671 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:02.461073 master-0 kubenswrapper[4808]: I1203 13:53:02.460986 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 13:53:02.461640 master-0 kubenswrapper[4808]: I1203 13:53:02.461586 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 13:53:02.584894 master-0 kubenswrapper[4808]: I1203 13:53:02.584755 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:02.584894 master-0 kubenswrapper[4808]: I1203 13:53:02.584879 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:02.584894 master-0 kubenswrapper[4808]: I1203 13:53:02.584907 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.584936 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns\") pod
\"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585033 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585089 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585110 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585149 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585196 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585324 master-0 kubenswrapper[4808]: I1203 13:53:02.585297 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585585 master-0 kubenswrapper[4808]: I1203 13:53:02.585359 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585585 master-0 kubenswrapper[4808]: I1203 13:53:02.585438 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585585 master-0 kubenswrapper[4808]: I1203 13:53:02.585472 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585585 master-0 
kubenswrapper[4808]: I1203 13:53:02.585520 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585585 master-0 kubenswrapper[4808]: I1203 13:53:02.585545 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585778 master-0 kubenswrapper[4808]: I1203 13:53:02.585697 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585824 master-0 kubenswrapper[4808]: I1203 13:53:02.585785 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585869 master-0 kubenswrapper[4808]: I1203 13:53:02.585825 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2m6s\" (UniqueName: \"kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585968 master-0 kubenswrapper[4808]: I1203 13:53:02.585878 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.585968 master-0 kubenswrapper[4808]: I1203 13:53:02.585918 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686427 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686493 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686519 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686535 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686552 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.686528 master-0 kubenswrapper[4808]: I1203 13:53:02.686576 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686603 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686624 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2m6s\" (UniqueName: 
\"kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686644 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686665 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686687 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686816 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686847 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686873 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686858 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686883 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686989 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.686925 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.687013 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.687072 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.687516 master-0 kubenswrapper[4808]: I1203 13:53:02.687122 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687150 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687205 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687210 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687229 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687280 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687313 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687364 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687374 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687387 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687412 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.687656 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.688707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.688848 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.689413 master-0 kubenswrapper[4808]: I1203 13:53:02.688890 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.692910 master-0 kubenswrapper[4808]: I1203 13:53:02.692822 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.996214 master-0 kubenswrapper[4808]: I1203 13:53:02.996149 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2m6s\" (UniqueName: \"kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s\") pod \"ovnkube-node-m5stk\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") " pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:02.999011 master-0 kubenswrapper[4808]: I1203 13:53:02.998970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:53:03.007822 master-0 kubenswrapper[4808]: I1203 13:53:03.007780 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:03.007950 master-0 kubenswrapper[4808]: E1203 13:53:03.007910 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:53:03.072426 master-0 kubenswrapper[4808]: I1203 13:53:03.072318 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" Dec 03 13:53:03.162023 master-0 kubenswrapper[4808]: I1203 13:53:03.161843 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:53:04.191248 master-0 kubenswrapper[4808]: I1203 13:53:04.181420 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5"} Dec 03 13:53:04.191248 master-0 kubenswrapper[4808]: I1203 13:53:04.184746 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"8ad5c6f42b048a7109c8dc8e6505bc10434b731d09b1f639a47358834f2d9395"} Dec 03 13:53:04.783695 master-0 kubenswrapper[4808]: I1203 13:53:04.783380 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:04.783695 master-0 kubenswrapper[4808]: E1203 13:53:04.783592 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 13:53:04.783695 master-0 kubenswrapper[4808]: E1203 13:53:04.783662 4808 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:53:20.783644991 +0000 UTC m=+150.223942926 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:53:05.005788 master-0 kubenswrapper[4808]: I1203 13:53:05.005507 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:05.005788 master-0 kubenswrapper[4808]: E1203 13:53:05.005689 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:05.980647 master-0 kubenswrapper[4808]: E1203 13:53:05.980576 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:07.005620 master-0 kubenswrapper[4808]: I1203 13:53:07.005498 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:07.006338 master-0 kubenswrapper[4808]: E1203 13:53:07.005763 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:09.005662 master-0 kubenswrapper[4808]: I1203 13:53:09.005588 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:09.006387 master-0 kubenswrapper[4808]: E1203 13:53:09.005797 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:10.981670 master-0 kubenswrapper[4808]: E1203 13:53:10.981494 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:11.005704 master-0 kubenswrapper[4808]: I1203 13:53:11.005617 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:11.007799 master-0 kubenswrapper[4808]: E1203 13:53:11.007669 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:11.208741 master-0 kubenswrapper[4808]: I1203 13:53:11.208572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"a6cef233e6c629ac6fba57da009a22816a29742255beeb15a48e7b7b48c9e536"}
Dec 03 13:53:12.215282 master-0 kubenswrapper[4808]: I1203 13:53:12.215063 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e"}
Dec 03 13:53:12.216625 master-0 kubenswrapper[4808]: I1203 13:53:12.216560 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"d91c321c57f44509a53798341a72ef3d6374d56a8925ee7e904aa8675b73f42d"}
Dec 03 13:53:13.005964 master-0 kubenswrapper[4808]: I1203 13:53:13.005832 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:13.006425 master-0 kubenswrapper[4808]: E1203 13:53:13.006149 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:15.006410 master-0 kubenswrapper[4808]: I1203 13:53:15.005035 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:15.006410 master-0 kubenswrapper[4808]: E1203 13:53:15.005459 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:15.229457 master-0 kubenswrapper[4808]: I1203 13:53:15.229314 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"a6cef233e6c629ac6fba57da009a22816a29742255beeb15a48e7b7b48c9e536"}
Dec 03 13:53:15.229457 master-0 kubenswrapper[4808]: I1203 13:53:15.229356 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="a6cef233e6c629ac6fba57da009a22816a29742255beeb15a48e7b7b48c9e536" exitCode=0
Dec 03 13:53:15.983605 master-0 kubenswrapper[4808]: E1203 13:53:15.983509 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:17.005067 master-0 kubenswrapper[4808]: I1203 13:53:17.004949 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:17.005984 master-0 kubenswrapper[4808]: E1203 13:53:17.005169 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:18.239785 master-0 kubenswrapper[4808]: I1203 13:53:18.239692 4808 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="51d215fc84560f1f6ad187305809ecedf73402cb7d8d1d69a0d33aa56e548bef" exitCode=1
Dec 03 13:53:18.239785 master-0 kubenswrapper[4808]: I1203 13:53:18.239771 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"51d215fc84560f1f6ad187305809ecedf73402cb7d8d1d69a0d33aa56e548bef"}
Dec 03 13:53:18.240517 master-0 kubenswrapper[4808]: I1203 13:53:18.240478 4808 scope.go:117] "RemoveContainer" containerID="51d215fc84560f1f6ad187305809ecedf73402cb7d8d1d69a0d33aa56e548bef"
Dec 03 13:53:19.005975 master-0 kubenswrapper[4808]: I1203 13:53:19.005831 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:19.006327 master-0 kubenswrapper[4808]: E1203 13:53:19.005988 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:19.273245 master-0 kubenswrapper[4808]: I1203 13:53:19.273060 4808 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b" exitCode=1
Dec 03 13:53:19.273245 master-0 kubenswrapper[4808]: I1203 13:53:19.273126 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"}
Dec 03 13:53:19.273245 master-0 kubenswrapper[4808]: I1203 13:53:19.273211 4808 scope.go:117] "RemoveContainer" containerID="928d610ce063d48a64cbc885d60fb997c89243d56cfbef517a5cbef004ed9c17"
Dec 03 13:53:19.273991 master-0 kubenswrapper[4808]: I1203 13:53:19.273805 4808 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:53:19.274093 master-0 kubenswrapper[4808]: E1203 13:53:19.274070 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a"
Dec 03 13:53:19.914690 master-0 kubenswrapper[4808]: I1203 13:53:19.914556 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-kk4tm" podStartSLOduration=16.163258934 podStartE2EDuration="31.914477687s" podCreationTimestamp="2025-12-03 13:52:48 +0000 UTC" firstStartedPulling="2025-12-03 13:52:48.432310008 +0000 UTC m=+117.872607943" lastFinishedPulling="2025-12-03 13:53:04.183528761 +0000 UTC m=+133.623826696" observedRunningTime="2025-12-03 13:53:19.913673193 +0000 UTC m=+149.353971158" watchObservedRunningTime="2025-12-03 13:53:19.914477687 +0000 UTC m=+149.354775612"
Dec 03 13:53:20.820319 master-0 kubenswrapper[4808]: I1203 13:53:20.820063 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:20.821481 master-0 kubenswrapper[4808]: E1203 13:53:20.820394 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:53:20.821481 master-0 kubenswrapper[4808]: E1203 13:53:20.820546 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:53:52.820508685 +0000 UTC m=+182.260806660 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 13:53:20.984065 master-0 kubenswrapper[4808]: E1203 13:53:20.983960 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:21.005677 master-0 kubenswrapper[4808]: I1203 13:53:21.005596 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:21.005918 master-0 kubenswrapper[4808]: E1203 13:53:21.005761 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:21.284429 master-0 kubenswrapper[4808]: I1203 13:53:21.283571 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2"}
Dec 03 13:53:21.431947 master-0 kubenswrapper[4808]: I1203 13:53:21.431863 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:21.432785 master-0 kubenswrapper[4808]: I1203 13:53:21.432514 4808 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:53:21.432785 master-0 kubenswrapper[4808]: E1203 13:53:21.432713 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a"
Dec 03 13:53:23.005238 master-0 kubenswrapper[4808]: I1203 13:53:23.005143 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:23.006647 master-0 kubenswrapper[4808]: E1203 13:53:23.005332 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:25.005841 master-0 kubenswrapper[4808]: I1203 13:53:25.005736 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:25.006815 master-0 kubenswrapper[4808]: E1203 13:53:25.006049 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:25.223634 master-0 kubenswrapper[4808]: I1203 13:53:25.223558 4808 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:25.224136 master-0 kubenswrapper[4808]: I1203 13:53:25.224102 4808 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:53:25.224425 master-0 kubenswrapper[4808]: E1203 13:53:25.224374 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a"
Dec 03 13:53:25.986037 master-0 kubenswrapper[4808]: E1203 13:53:25.985928 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:27.005065 master-0 kubenswrapper[4808]: I1203 13:53:27.004970 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:27.005857 master-0 kubenswrapper[4808]: E1203 13:53:27.005128 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:28.059909 master-0 kubenswrapper[4808]: I1203 13:53:28.059812 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:28.060600 master-0 kubenswrapper[4808]: I1203 13:53:28.060509 4808 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:53:28.060769 master-0 kubenswrapper[4808]: E1203 13:53:28.060739 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a"
Dec 03 13:53:29.006093 master-0 kubenswrapper[4808]: I1203 13:53:29.005905 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:29.006093 master-0 kubenswrapper[4808]: E1203 13:53:29.006108 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:30.986911 master-0 kubenswrapper[4808]: E1203 13:53:30.986815 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:31.008902 master-0 kubenswrapper[4808]: I1203 13:53:31.008839 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:31.009198 master-0 kubenswrapper[4808]: E1203 13:53:31.008953 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:33.005605 master-0 kubenswrapper[4808]: I1203 13:53:33.005535 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:33.006330 master-0 kubenswrapper[4808]: E1203 13:53:33.005701 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:33.325745 master-0 kubenswrapper[4808]: I1203 13:53:33.325651 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="54eb7436f6ac8799b7f10cde49a492e33995d42df0890008db66fbf955cc9e20" exitCode=0
Dec 03 13:53:33.326012 master-0 kubenswrapper[4808]: I1203 13:53:33.325770 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"54eb7436f6ac8799b7f10cde49a492e33995d42df0890008db66fbf955cc9e20"}
Dec 03 13:53:33.328050 master-0 kubenswrapper[4808]: I1203 13:53:33.327982 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"3a78cb4cf7263b7ff727b4984d0ab0b818adeb40c92408bba87e262eea4f142e"}
Dec 03 13:53:33.332224 master-0 kubenswrapper[4808]: I1203 13:53:33.331979 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8" exitCode=0
Dec 03 13:53:33.332224 master-0 kubenswrapper[4808]: I1203 13:53:33.332039 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"}
Dec 03 13:53:33.402758 master-0 kubenswrapper[4808]: I1203 13:53:33.402653 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" podStartSLOduration=15.003347777 podStartE2EDuration="33.402625729s" podCreationTimestamp="2025-12-03 13:53:00 +0000 UTC" firstStartedPulling="2025-12-03 13:53:14.728799719 +0000 UTC m=+144.169097654" lastFinishedPulling="2025-12-03 13:53:33.128077671 +0000 UTC m=+162.568375606" observedRunningTime="2025-12-03 13:53:33.370146794 +0000 UTC m=+162.810444749" watchObservedRunningTime="2025-12-03 13:53:33.402625729 +0000 UTC m=+162.842923664"
Dec 03 13:53:34.339698 master-0 kubenswrapper[4808]: I1203 13:53:34.339342 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"}
Dec 03 13:53:34.339698 master-0 kubenswrapper[4808]: I1203 13:53:34.339695 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"}
Dec 03 13:53:34.339698 master-0 kubenswrapper[4808]: I1203 13:53:34.339710 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"}
Dec 03 13:53:34.339698 master-0 kubenswrapper[4808]: I1203 13:53:34.339719 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"}
Dec 03 13:53:34.339698 master-0 kubenswrapper[4808]: I1203 13:53:34.339728 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"}
Dec 03 13:53:34.340786 master-0 kubenswrapper[4808]: I1203 13:53:34.339739 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"}
Dec 03 13:53:35.006235 master-0 kubenswrapper[4808]: I1203 13:53:35.005745 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:35.006235 master-0 kubenswrapper[4808]: E1203 13:53:35.006178 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:35.585923 master-0 kubenswrapper[4808]: I1203 13:53:35.585775 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="26079c56109d1215373542cb7279aa79197f8d276b87f23f84c5d431dd38bc3f" exitCode=0
Dec 03 13:53:35.585923 master-0 kubenswrapper[4808]: I1203 13:53:35.585868 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"26079c56109d1215373542cb7279aa79197f8d276b87f23f84c5d431dd38bc3f"}
Dec 03 13:53:35.988629 master-0 kubenswrapper[4808]: E1203 13:53:35.988417 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:36.592914 master-0 kubenswrapper[4808]: I1203 13:53:36.592829 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"}
Dec 03 13:53:37.005603 master-0 kubenswrapper[4808]: I1203 13:53:37.005421 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:37.005787 master-0 kubenswrapper[4808]: E1203 13:53:37.005612 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:39.008340 master-0 kubenswrapper[4808]: I1203 13:53:39.008276 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:39.009004 master-0 kubenswrapper[4808]: E1203 13:53:39.008541 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:40.005830 master-0 kubenswrapper[4808]: I1203 13:53:40.005746 4808 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:53:40.989127 master-0 kubenswrapper[4808]: E1203 13:53:40.989040 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:41.005553 master-0 kubenswrapper[4808]: I1203 13:53:41.005468 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:41.006303 master-0 kubenswrapper[4808]: E1203 13:53:41.006231 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:41.733412 master-0 kubenswrapper[4808]: I1203 13:53:41.732944 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d"}
Dec 03 13:53:41.741431 master-0 kubenswrapper[4808]: I1203 13:53:41.741364 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerStarted","Data":"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"}
Dec 03 13:53:41.741825 master-0 kubenswrapper[4808]: I1203 13:53:41.741785 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:41.741825 master-0 kubenswrapper[4808]: I1203 13:53:41.741819 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:41.767762 master-0 kubenswrapper[4808]: I1203 13:53:41.767690 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:42.744601 master-0 kubenswrapper[4808]: I1203 13:53:42.744465 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:42.771761 master-0 kubenswrapper[4808]: I1203 13:53:42.771670 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:53:43.005355 master-0 kubenswrapper[4808]: I1203 13:53:43.005043 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:43.005355 master-0 kubenswrapper[4808]: E1203 13:53:43.005270 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:45.005101 master-0 kubenswrapper[4808]: I1203 13:53:45.004981 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:45.006489 master-0 kubenswrapper[4808]: E1203 13:53:45.005147 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:45.990750 master-0 kubenswrapper[4808]: E1203 13:53:45.990673 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:53:47.005551 master-0 kubenswrapper[4808]: I1203 13:53:47.005445 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:47.006226 master-0 kubenswrapper[4808]: E1203 13:53:47.005621 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:47.414020 master-0 kubenswrapper[4808]: I1203 13:53:47.413873 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podStartSLOduration=18.569417983 podStartE2EDuration="47.41382394s" podCreationTimestamp="2025-12-03 13:53:00 +0000 UTC" firstStartedPulling="2025-12-03 13:53:04.184118648 +0000 UTC m=+133.624416583" lastFinishedPulling="2025-12-03 13:53:33.028524595 +0000 UTC m=+162.468822540" observedRunningTime="2025-12-03 13:53:47.413418118 +0000 UTC m=+176.853716073" watchObservedRunningTime="2025-12-03 13:53:47.41382394 +0000 UTC m=+176.854121865"
Dec 03 13:53:47.621366 master-0 kubenswrapper[4808]: E1203 13:53:47.621239 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455"
Dec 03 13:53:47.763841 master-0 kubenswrapper[4808]: I1203 13:53:47.763752 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"f9b2a45b3882aa4aab7d621861c3b576125dca392eda394a42bdbf272c5861e2"}
Dec 03 13:53:48.060172 master-0 kubenswrapper[4808]: I1203 13:53:48.060008 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:48.064557 master-0 kubenswrapper[4808]: I1203 13:53:48.064510 4808 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:48.769967 master-0 kubenswrapper[4808]: I1203 13:53:48.769852 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"f9b2a45b3882aa4aab7d621861c3b576125dca392eda394a42bdbf272c5861e2"}
Dec 03 13:53:48.770439 master-0 kubenswrapper[4808]: I1203 13:53:48.769788 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="f9b2a45b3882aa4aab7d621861c3b576125dca392eda394a42bdbf272c5861e2" exitCode=0
Dec 03 13:53:48.770773 master-0 kubenswrapper[4808]: I1203 13:53:48.770688 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:53:49.005230 master-0 kubenswrapper[4808]: I1203 13:53:49.005177 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:53:49.005413 master-0 kubenswrapper[4808]: E1203 13:53:49.005373 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:53:49.777904 master-0 kubenswrapper[4808]: I1203 13:53:49.777833 4808 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="a71ca428dfafcfdc36c094ec10a4f26a0955b62eee12c5643b197e7b67fda68a" exitCode=0
Dec 03 13:53:49.778735 master-0 kubenswrapper[4808]: I1203 13:53:49.777930 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"a71ca428dfafcfdc36c094ec10a4f26a0955b62eee12c5643b197e7b67fda68a"}
Dec 03 13:53:50.787089 master-0 kubenswrapper[4808]: I1203 13:53:50.786748 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"faaade0e087b9881354c66070961350c192c84a21bcde13985c46c7344e4fb17"}
Dec 03 13:53:50.810848 master-0 kubenswrapper[4808]: I1203 13:53:50.810658 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-42hmk" podStartSLOduration=3.841729568 podStartE2EDuration="1m2.810595573s" podCreationTimestamp="2025-12-03 13:52:48 +0000 UTC" firstStartedPulling="2025-12-03 13:52:48.529461784 +0000 UTC m=+117.969759719" lastFinishedPulling="2025-12-03 13:53:47.498327799 +0000 UTC m=+176.938625724" observedRunningTime="2025-12-03 13:53:50.809269755 +0000 UTC m=+180.249567690" watchObservedRunningTime="2025-12-03 13:53:50.810595573 +0000 UTC m=+180.250893508"
Dec 03 13:53:50.992083 master-0 kubenswrapper[4808]: E1203 13:53:50.991982 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" Dec 03 13:53:51.005513 master-0 kubenswrapper[4808]: I1203 13:53:51.005407 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:51.006340 master-0 kubenswrapper[4808]: E1203 13:53:51.006253 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:53:51.436730 master-0 kubenswrapper[4808]: I1203 13:53:51.436618 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:53:52.529030 master-0 kubenswrapper[4808]: I1203 13:53:52.528953 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:53:52.529880 master-0 kubenswrapper[4808]: E1203 13:53:52.529203 4808 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:53:52.529880 master-0 kubenswrapper[4808]: E1203 13:53:52.529331 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:54.529306046 +0000 UTC m=+303.969603981 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:53:52.831865 master-0 kubenswrapper[4808]: I1203 13:53:52.831614 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:52.832204 master-0 kubenswrapper[4808]: E1203 13:53:52.831906 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 13:53:52.832204 master-0 kubenswrapper[4808]: E1203 13:53:52.832066 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:56.832035413 +0000 UTC m=+246.272333388 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 13:53:53.005858 master-0 kubenswrapper[4808]: I1203 13:53:53.005785 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:53.006108 master-0 kubenswrapper[4808]: E1203 13:53:53.005970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:53:55.006070 master-0 kubenswrapper[4808]: I1203 13:53:55.005985 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:55.006784 master-0 kubenswrapper[4808]: E1203 13:53:55.006254 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:53:55.993790 master-0 kubenswrapper[4808]: E1203 13:53:55.993717 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:53:57.005620 master-0 kubenswrapper[4808]: I1203 13:53:57.005505 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:57.006321 master-0 kubenswrapper[4808]: E1203 13:53:57.005706 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:53:57.810531 master-0 kubenswrapper[4808]: I1203 13:53:57.810427 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kk4tm_c777c9de-1ace-46be-b5c2-c71d252f53f4/kube-multus/0.log" Dec 03 13:53:57.810531 master-0 kubenswrapper[4808]: I1203 13:53:57.810521 4808 generic.go:334] "Generic (PLEG): container finished" podID="c777c9de-1ace-46be-b5c2-c71d252f53f4" containerID="eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e" exitCode=1 Dec 03 13:53:57.810897 master-0 kubenswrapper[4808]: I1203 13:53:57.810580 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerDied","Data":"eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e"} Dec 03 13:53:57.811215 master-0 kubenswrapper[4808]: I1203 13:53:57.811179 4808 scope.go:117] "RemoveContainer" containerID="eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e" Dec 03 13:53:58.815644 master-0 kubenswrapper[4808]: I1203 13:53:58.815599 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kk4tm_c777c9de-1ace-46be-b5c2-c71d252f53f4/kube-multus/0.log" Dec 03 13:53:58.816311 master-0 kubenswrapper[4808]: I1203 13:53:58.815676 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" 
event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"775dcd93893820b4c21bdc8fe2a71fe4083f1b2116fec71cd51c334826379d59"} Dec 03 13:53:59.005922 master-0 kubenswrapper[4808]: I1203 13:53:59.005846 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:53:59.006316 master-0 kubenswrapper[4808]: E1203 13:53:59.006039 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:00.996195 master-0 kubenswrapper[4808]: E1203 13:54:00.995835 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:01.005475 master-0 kubenswrapper[4808]: I1203 13:54:01.005385 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:54:01.005825 master-0 kubenswrapper[4808]: I1203 13:54:01.005485 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:01.006400 master-0 kubenswrapper[4808]: E1203 13:54:01.006358 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:03.005445 master-0 kubenswrapper[4808]: I1203 13:54:03.005372 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:03.006054 master-0 kubenswrapper[4808]: E1203 13:54:03.005541 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:03.089883 master-0 kubenswrapper[4808]: I1203 13:54:03.089412 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" probeResult="failure" output="" Dec 03 13:54:05.005835 master-0 kubenswrapper[4808]: I1203 13:54:05.005711 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:05.006818 master-0 kubenswrapper[4808]: E1203 13:54:05.005978 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:05.997319 master-0 kubenswrapper[4808]: E1203 13:54:05.997148 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:07.005823 master-0 kubenswrapper[4808]: I1203 13:54:07.005728 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:07.006777 master-0 kubenswrapper[4808]: E1203 13:54:07.005963 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:09.006037 master-0 kubenswrapper[4808]: I1203 13:54:09.005904 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:09.007065 master-0 kubenswrapper[4808]: E1203 13:54:09.006212 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:11.005143 master-0 kubenswrapper[4808]: I1203 13:54:11.004926 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:11.005143 master-0 kubenswrapper[4808]: E1203 13:54:11.004912 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:11.006025 master-0 kubenswrapper[4808]: E1203 13:54:11.005597 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:13.005972 master-0 kubenswrapper[4808]: I1203 13:54:13.005817 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:13.006882 master-0 kubenswrapper[4808]: E1203 13:54:13.006057 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:15.005696 master-0 kubenswrapper[4808]: I1203 13:54:15.005615 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:15.006390 master-0 kubenswrapper[4808]: E1203 13:54:15.005970 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:16.006159 master-0 kubenswrapper[4808]: E1203 13:54:16.006072 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:16.872595 master-0 kubenswrapper[4808]: I1203 13:54:16.872527 4808 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2" exitCode=1 Dec 03 13:54:16.872595 master-0 kubenswrapper[4808]: I1203 13:54:16.872590 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2"} Dec 03 13:54:16.872905 master-0 kubenswrapper[4808]: I1203 13:54:16.872645 4808 scope.go:117] "RemoveContainer" containerID="51d215fc84560f1f6ad187305809ecedf73402cb7d8d1d69a0d33aa56e548bef" Dec 03 13:54:16.873376 master-0 kubenswrapper[4808]: I1203 13:54:16.873171 4808 scope.go:117] "RemoveContainer" containerID="9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2" Dec 03 13:54:16.873439 master-0 kubenswrapper[4808]: E1203 13:54:16.873414 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(d78739a7694769882b7e47ea5ac08a10)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="d78739a7694769882b7e47ea5ac08a10" Dec 03 13:54:17.006075 master-0 kubenswrapper[4808]: I1203 13:54:17.005974 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:17.006447 master-0 kubenswrapper[4808]: E1203 13:54:17.006180 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:18.679143 master-0 kubenswrapper[4808]: I1203 13:54:18.679068 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5stk"] Dec 03 13:54:18.679888 master-0 kubenswrapper[4808]: I1203 13:54:18.679697 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-controller" containerID="cri-o://0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a" gracePeriod=30 Dec 03 13:54:18.679888 master-0 kubenswrapper[4808]: I1203 13:54:18.679742 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="nbdb" containerID="cri-o://6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f" gracePeriod=30 Dec 03 13:54:18.679888 master-0 kubenswrapper[4808]: I1203 13:54:18.679847 4808 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-node" containerID="cri-o://402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228" gracePeriod=30 Dec 03 13:54:18.680029 master-0 kubenswrapper[4808]: I1203 13:54:18.679791 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="sbdb" containerID="cri-o://4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216" gracePeriod=30 Dec 03 13:54:18.680029 master-0 kubenswrapper[4808]: I1203 13:54:18.679911 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-acl-logging" containerID="cri-o://ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561" gracePeriod=30 Dec 03 13:54:18.680029 master-0 kubenswrapper[4808]: I1203 13:54:18.679957 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0" gracePeriod=30 Dec 03 13:54:18.680029 master-0 kubenswrapper[4808]: I1203 13:54:18.679938 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="northd" containerID="cri-o://1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4" gracePeriod=30 Dec 03 13:54:18.707732 master-0 kubenswrapper[4808]: I1203 13:54:18.706180 4808 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" 
podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" probeResult="failure" output="" Dec 03 13:54:18.712645 master-0 kubenswrapper[4808]: I1203 13:54:18.712518 4808 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" containerID="cri-o://51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191" gracePeriod=30 Dec 03 13:54:18.883602 master-0 kubenswrapper[4808]: I1203 13:54:18.883539 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-ovn-metrics/0.log" Dec 03 13:54:18.884050 master-0 kubenswrapper[4808]: I1203 13:54:18.884017 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-node/0.log" Dec 03 13:54:18.884559 master-0 kubenswrapper[4808]: I1203 13:54:18.884522 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-acl-logging/0.log" Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.884999 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-controller/0.log" Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885461 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216" exitCode=0 Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885483 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" 
containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0" exitCode=143 Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885491 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228" exitCode=143 Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885500 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561" exitCode=143 Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885507 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a" exitCode=143 Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885557 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"} Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885597 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"} Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885608 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"} Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885617 4808 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"} Dec 03 13:54:18.886336 master-0 kubenswrapper[4808]: I1203 13:54:18.885625 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"} Dec 03 13:54:19.006486 master-0 kubenswrapper[4808]: I1203 13:54:19.006109 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:19.006486 master-0 kubenswrapper[4808]: E1203 13:54:19.006363 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:54:19.715670 master-0 kubenswrapper[4808]: I1203 13:54:19.715620 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovnkube-controller/0.log"
Dec 03 13:54:19.717758 master-0 kubenswrapper[4808]: I1203 13:54:19.717718 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-ovn-metrics/0.log"
Dec 03 13:54:19.718411 master-0 kubenswrapper[4808]: I1203 13:54:19.718384 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-node/0.log"
Dec 03 13:54:19.719141 master-0 kubenswrapper[4808]: I1203 13:54:19.719114 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-acl-logging/0.log"
Dec 03 13:54:19.720027 master-0 kubenswrapper[4808]: I1203 13:54:19.719974 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-controller/0.log"
Dec 03 13:54:19.720724 master-0 kubenswrapper[4808]: I1203 13:54:19.720679 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:54:19.893247 master-0 kubenswrapper[4808]: I1203 13:54:19.893080 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovnkube-controller/0.log"
Dec 03 13:54:19.896090 master-0 kubenswrapper[4808]: I1203 13:54:19.896062 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-ovn-metrics/0.log"
Dec 03 13:54:19.896828 master-0 kubenswrapper[4808]: I1203 13:54:19.896805 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/kube-rbac-proxy-node/0.log"
Dec 03 13:54:19.897368 master-0 kubenswrapper[4808]: I1203 13:54:19.897330 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-acl-logging/0.log"
Dec 03 13:54:19.898367 master-0 kubenswrapper[4808]: I1203 13:54:19.898334 4808 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-m5stk_d11fa67d-4912-4004-af20-4f88f36e2b80/ovn-controller/0.log"
Dec 03 13:54:19.900422 master-0 kubenswrapper[4808]: I1203 13:54:19.900341 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191" exitCode=1
Dec 03 13:54:19.900422 master-0 kubenswrapper[4808]: I1203 13:54:19.900419 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f" exitCode=0
Dec 03 13:54:19.900422 master-0 kubenswrapper[4808]: I1203 13:54:19.900433 4808 generic.go:334] "Generic (PLEG): container finished" podID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4" exitCode=0
Dec 03 13:54:19.900651 master-0 kubenswrapper[4808]: I1203 13:54:19.900468 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"}
Dec 03 13:54:19.900651 master-0 kubenswrapper[4808]: I1203 13:54:19.900519 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"}
Dec 03 13:54:19.900651 master-0 kubenswrapper[4808]: I1203 13:54:19.900537 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"}
Dec 03 13:54:19.900651 master-0 kubenswrapper[4808]: I1203 13:54:19.900551 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk" event={"ID":"d11fa67d-4912-4004-af20-4f88f36e2b80","Type":"ContainerDied","Data":"8ad5c6f42b048a7109c8dc8e6505bc10434b731d09b1f639a47358834f2d9395"}
Dec 03 13:54:19.900651 master-0 kubenswrapper[4808]: I1203 13:54:19.900578 4808 scope.go:117] "RemoveContainer" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"
Dec 03 13:54:19.901235 master-0 kubenswrapper[4808]: I1203 13:54:19.900796 4808 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m5stk"
Dec 03 13:54:19.906138 master-0 kubenswrapper[4808]: I1203 13:54:19.906097 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906218 master-0 kubenswrapper[4808]: I1203 13:54:19.906147 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906218 master-0 kubenswrapper[4808]: I1203 13:54:19.906180 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906218 master-0 kubenswrapper[4808]: I1203 13:54:19.906211 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906368 master-0 kubenswrapper[4808]: I1203 13:54:19.906236 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906368 master-0 kubenswrapper[4808]: I1203 13:54:19.906281 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906368 master-0 kubenswrapper[4808]: I1203 13:54:19.906307 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906368 master-0 kubenswrapper[4808]: I1203 13:54:19.906295 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906368 master-0 kubenswrapper[4808]: I1203 13:54:19.906352 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2m6s\" (UniqueName: \"kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906345 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906411 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906417 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906402 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906390 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906451 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log" (OuterVolumeSpecName: "node-log") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906365 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906378 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash" (OuterVolumeSpecName: "host-slash") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906513 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906559 master-0 kubenswrapper[4808]: I1203 13:54:19.906551 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906594 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906637 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906676 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906707 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906742 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906769 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906800 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906828 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906856 4808 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib\") pod \"d11fa67d-4912-4004-af20-4f88f36e2b80\" (UID: \"d11fa67d-4912-4004-af20-4f88f36e2b80\") "
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906744 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.906893 master-0 kubenswrapper[4808]: I1203 13:54:19.906791 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.906819 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.906831 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.906859 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.906857 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket" (OuterVolumeSpecName: "log-socket") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.907172 4808 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-node-log\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.907198 4808 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-systemd-units\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907210 master-0 kubenswrapper[4808]: I1203 13:54:19.907218 4808 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-ovn\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907234 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907250 4808 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907288 4808 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-log-socket\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907304 4808 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907321 4808 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-kubelet\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907336 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-netns\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907350 4808 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907369 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907366 4808 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907425 4808 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907441 4808 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-slash\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907454 4808 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907484 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:54:19.907554 master-0 kubenswrapper[4808]: I1203 13:54:19.907530 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:54:19.915648 master-0 kubenswrapper[4808]: I1203 13:54:19.915534 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s" (OuterVolumeSpecName: "kube-api-access-f2m6s") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "kube-api-access-f2m6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:54:19.917957 master-0 kubenswrapper[4808]: I1203 13:54:19.917877 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:54:19.918254 master-0 kubenswrapper[4808]: I1203 13:54:19.917978 4808 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d11fa67d-4912-4004-af20-4f88f36e2b80" (UID: "d11fa67d-4912-4004-af20-4f88f36e2b80"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:54:19.918254 master-0 kubenswrapper[4808]: I1203 13:54:19.918152 4808 scope.go:117] "RemoveContainer" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"
Dec 03 13:54:19.935349 master-0 kubenswrapper[4808]: I1203 13:54:19.935286 4808 scope.go:117] "RemoveContainer" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"
Dec 03 13:54:19.951979 master-0 kubenswrapper[4808]: I1203 13:54:19.951923 4808 scope.go:117] "RemoveContainer" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"
Dec 03 13:54:19.967423 master-0 kubenswrapper[4808]: I1203 13:54:19.967358 4808 scope.go:117] "RemoveContainer" containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"
Dec 03 13:54:19.982029 master-0 kubenswrapper[4808]: I1203 13:54:19.981966 4808 scope.go:117] "RemoveContainer" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.007627 4808 scope.go:117] "RemoveContainer" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008842 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008886 4808 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008901 4808 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d11fa67d-4912-4004-af20-4f88f36e2b80-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008910 4808 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2m6s\" (UniqueName: \"kubernetes.io/projected/d11fa67d-4912-4004-af20-4f88f36e2b80-kube-api-access-f2m6s\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008922 4808 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d11fa67d-4912-4004-af20-4f88f36e2b80-run-systemd\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.009780 master-0 kubenswrapper[4808]: I1203 13:54:20.008931 4808 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d11fa67d-4912-4004-af20-4f88f36e2b80-env-overrides\") on node \"master-0\" DevicePath \"\""
Dec 03 13:54:20.028136 master-0 kubenswrapper[4808]: I1203 13:54:20.027385 4808 scope.go:117] "RemoveContainer" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"
Dec 03 13:54:20.042128 master-0 kubenswrapper[4808]: I1203 13:54:20.042065 4808 scope.go:117] "RemoveContainer" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"
Dec 03 13:54:20.056512 master-0 kubenswrapper[4808]: I1203 13:54:20.056429 4808 scope.go:117] "RemoveContainer" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"
Dec 03 13:54:20.057919 master-0 kubenswrapper[4808]: E1203 13:54:20.057235 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": container with ID starting with 51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191 not found: ID does not exist" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"
Dec 03 13:54:20.057919 master-0 kubenswrapper[4808]: I1203 13:54:20.057305 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"} err="failed to get container status \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": rpc error: code = NotFound desc = could not find container \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": container with ID starting with 51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191 not found: ID does not exist"
Dec 03 13:54:20.057919 master-0 kubenswrapper[4808]: I1203 13:54:20.057428 4808 scope.go:117] "RemoveContainer" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"
Dec 03 13:54:20.058769 master-0 kubenswrapper[4808]: E1203 13:54:20.058571 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": container with ID starting with 4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216 not found: ID does not exist" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"
Dec 03 13:54:20.058769 master-0 kubenswrapper[4808]: I1203 13:54:20.058610 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"} err="failed to get container status \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": rpc error: code = NotFound desc = could not find container \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": container with ID starting with 4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216 not found: ID does not exist"
Dec 03 13:54:20.058769 master-0 kubenswrapper[4808]: I1203 13:54:20.058631 4808 scope.go:117] "RemoveContainer" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"
Dec 03 13:54:20.060303 master-0 kubenswrapper[4808]: E1203 13:54:20.059849 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": container with ID starting with 6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f not found: ID does not exist" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"
Dec 03 13:54:20.060303 master-0 kubenswrapper[4808]: I1203 13:54:20.059875 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"} err="failed to get container status \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": rpc error: code = NotFound desc = could not find container \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": container with ID starting with 6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f not found: ID does not exist"
Dec 03 13:54:20.060303 master-0 kubenswrapper[4808]: I1203 13:54:20.059896 4808 scope.go:117] "RemoveContainer" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"
Dec 03 13:54:20.061199 master-0 kubenswrapper[4808]: E1203 13:54:20.061119 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": container with ID starting with 1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4 not found: ID does not exist" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"
Dec 03 13:54:20.061345 master-0 kubenswrapper[4808]: I1203 13:54:20.061224 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"} err="failed to get container status \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": rpc error: code = NotFound desc = could not find container \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": container with ID starting with 1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4 not found: ID does not exist"
Dec 03 13:54:20.061711 master-0 kubenswrapper[4808]: I1203 13:54:20.061667 4808 scope.go:117] "RemoveContainer" containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"
Dec 03 13:54:20.062248 master-0 kubenswrapper[4808]: E1203 13:54:20.062202 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": container with ID starting with 49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0 not found: ID does not exist" containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"
Dec 03 13:54:20.062540 master-0 kubenswrapper[4808]: I1203 13:54:20.062242 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"} err="failed to get container status \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": rpc error: code = NotFound desc = could not find container \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": container with ID starting with 49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0 not found: ID does not exist"
Dec 03 13:54:20.062540 master-0 kubenswrapper[4808]: I1203 13:54:20.062281 4808 scope.go:117] "RemoveContainer" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"
Dec 03 13:54:20.063275 master-0 kubenswrapper[4808]: E1203 13:54:20.062933 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": container with ID starting with 402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228 not found: ID does not exist" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"
Dec 03 13:54:20.063353 master-0 kubenswrapper[4808]: I1203 13:54:20.063286 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"} err="failed to get container status \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": rpc error: code = NotFound desc = could not find container \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": container with ID starting with 402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228 not found: ID does not exist"
Dec 03 13:54:20.063353 master-0 kubenswrapper[4808]: I1203 13:54:20.063315 4808 scope.go:117] "RemoveContainer" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"
Dec 03 13:54:20.064059 master-0 kubenswrapper[4808]: E1203 13:54:20.064036 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": container with ID starting with ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561 not found: ID does not exist" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"
Dec 03 13:54:20.064358 master-0 kubenswrapper[4808]: I1203 13:54:20.064063 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"} err="failed to get container status \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": rpc error: code = NotFound desc = could not find container \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": container with ID starting with ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561 not found: ID does not exist"
Dec 03 13:54:20.064358 master-0 kubenswrapper[4808]: I1203 13:54:20.064081 4808 scope.go:117] "RemoveContainer" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"
Dec 03 13:54:20.065758 master-0 kubenswrapper[4808]: E1203 13:54:20.065656 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": container with ID starting with 0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a not found: ID does not exist" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"
Dec 03 13:54:20.065758 master-0 kubenswrapper[4808]: I1203 13:54:20.065698 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"} err="failed to get container status \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": rpc error: code = NotFound desc = could not find container \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": container with ID starting with 0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a not found: ID does not exist"
Dec 03 13:54:20.065758 master-0 kubenswrapper[4808]: I1203 13:54:20.065717 4808 scope.go:117] "RemoveContainer" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"
Dec 03 13:54:20.066518 master-0 kubenswrapper[4808]: E1203 13:54:20.066446 4808 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": container with ID starting with f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8 not found: ID does not exist" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8" Dec 03 13:54:20.066742 master-0 kubenswrapper[4808]: I1203 13:54:20.066530 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"} err="failed to get container status \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": rpc error: code = NotFound desc = could not find container \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": container with ID starting with f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8 not found: ID does not exist" Dec 03 13:54:20.066742 master-0 kubenswrapper[4808]: I1203 13:54:20.066577 4808 scope.go:117] "RemoveContainer" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191" Dec 03 13:54:20.067079 master-0 kubenswrapper[4808]: I1203 13:54:20.067046 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"} err="failed to get container status \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": rpc error: code = NotFound desc = could not find container \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": container with ID starting with 51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191 not found: ID does not exist" Dec 03 13:54:20.067193 master-0 kubenswrapper[4808]: I1203 13:54:20.067080 4808 scope.go:117] "RemoveContainer" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216" Dec 03 13:54:20.067548 master-0 kubenswrapper[4808]: I1203 13:54:20.067511 4808 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"} err="failed to get container status \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": rpc error: code = NotFound desc = could not find container \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": container with ID starting with 4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216 not found: ID does not exist" Dec 03 13:54:20.067618 master-0 kubenswrapper[4808]: I1203 13:54:20.067548 4808 scope.go:117] "RemoveContainer" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f" Dec 03 13:54:20.067932 master-0 kubenswrapper[4808]: I1203 13:54:20.067904 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"} err="failed to get container status \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": rpc error: code = NotFound desc = could not find container \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": container with ID starting with 6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f not found: ID does not exist" Dec 03 13:54:20.067932 master-0 kubenswrapper[4808]: I1203 13:54:20.067929 4808 scope.go:117] "RemoveContainer" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4" Dec 03 13:54:20.068492 master-0 kubenswrapper[4808]: I1203 13:54:20.068412 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"} err="failed to get container status \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": rpc error: code = NotFound desc = could not find container \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": container with ID starting with 
1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4 not found: ID does not exist" Dec 03 13:54:20.068492 master-0 kubenswrapper[4808]: I1203 13:54:20.068467 4808 scope.go:117] "RemoveContainer" containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0" Dec 03 13:54:20.069293 master-0 kubenswrapper[4808]: I1203 13:54:20.069199 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"} err="failed to get container status \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": rpc error: code = NotFound desc = could not find container \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": container with ID starting with 49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0 not found: ID does not exist" Dec 03 13:54:20.069293 master-0 kubenswrapper[4808]: I1203 13:54:20.069232 4808 scope.go:117] "RemoveContainer" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228" Dec 03 13:54:20.069736 master-0 kubenswrapper[4808]: I1203 13:54:20.069631 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"} err="failed to get container status \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": rpc error: code = NotFound desc = could not find container \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": container with ID starting with 402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228 not found: ID does not exist" Dec 03 13:54:20.069736 master-0 kubenswrapper[4808]: I1203 13:54:20.069657 4808 scope.go:117] "RemoveContainer" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561" Dec 03 13:54:20.070074 master-0 kubenswrapper[4808]: I1203 13:54:20.070041 4808 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"} err="failed to get container status \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": rpc error: code = NotFound desc = could not find container \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": container with ID starting with ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561 not found: ID does not exist" Dec 03 13:54:20.070633 master-0 kubenswrapper[4808]: I1203 13:54:20.070073 4808 scope.go:117] "RemoveContainer" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a" Dec 03 13:54:20.070741 master-0 kubenswrapper[4808]: I1203 13:54:20.070709 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"} err="failed to get container status \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": rpc error: code = NotFound desc = could not find container \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": container with ID starting with 0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a not found: ID does not exist" Dec 03 13:54:20.070741 master-0 kubenswrapper[4808]: I1203 13:54:20.070731 4808 scope.go:117] "RemoveContainer" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8" Dec 03 13:54:20.072409 master-0 kubenswrapper[4808]: I1203 13:54:20.071075 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"} err="failed to get container status \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": rpc error: code = NotFound desc = could not find container \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": container with ID starting with 
f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8 not found: ID does not exist" Dec 03 13:54:20.072409 master-0 kubenswrapper[4808]: I1203 13:54:20.071115 4808 scope.go:117] "RemoveContainer" containerID="51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191" Dec 03 13:54:20.072940 master-0 kubenswrapper[4808]: I1203 13:54:20.072892 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191"} err="failed to get container status \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": rpc error: code = NotFound desc = could not find container \"51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191\": container with ID starting with 51860155e7c8e8b24e87e974327819c730c262ccdd1903ea136cc9b819ea8191 not found: ID does not exist" Dec 03 13:54:20.072940 master-0 kubenswrapper[4808]: I1203 13:54:20.072924 4808 scope.go:117] "RemoveContainer" containerID="4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216" Dec 03 13:54:20.073768 master-0 kubenswrapper[4808]: I1203 13:54:20.073606 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216"} err="failed to get container status \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": rpc error: code = NotFound desc = could not find container \"4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216\": container with ID starting with 4e278a6e39acce22dac9f544e7fd8f3119a2a882db1b7fbe7df90e1238ebe216 not found: ID does not exist" Dec 03 13:54:20.073768 master-0 kubenswrapper[4808]: I1203 13:54:20.073660 4808 scope.go:117] "RemoveContainer" containerID="6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f" Dec 03 13:54:20.074324 master-0 kubenswrapper[4808]: I1203 13:54:20.074218 4808 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f"} err="failed to get container status \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": rpc error: code = NotFound desc = could not find container \"6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f\": container with ID starting with 6ad0b14a672b2ba2af4cbae01325db094855289a28b963cf72590fcd4390458f not found: ID does not exist" Dec 03 13:54:20.074324 master-0 kubenswrapper[4808]: I1203 13:54:20.074304 4808 scope.go:117] "RemoveContainer" containerID="1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4" Dec 03 13:54:20.074964 master-0 kubenswrapper[4808]: I1203 13:54:20.074852 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4"} err="failed to get container status \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": rpc error: code = NotFound desc = could not find container \"1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4\": container with ID starting with 1a3dac5419172aae47a2ba91e6370f6fc109e0cc4bc19c23cba6bc1b95525da4 not found: ID does not exist" Dec 03 13:54:20.074964 master-0 kubenswrapper[4808]: I1203 13:54:20.074883 4808 scope.go:117] "RemoveContainer" containerID="49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0" Dec 03 13:54:20.075454 master-0 kubenswrapper[4808]: I1203 13:54:20.075420 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0"} err="failed to get container status \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": rpc error: code = NotFound desc = could not find container \"49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0\": container with ID starting with 
49f9f73782c47d5d28f87cc439bba64051ead5ff1a1e429bdddcbb4d236afee0 not found: ID does not exist" Dec 03 13:54:20.075454 master-0 kubenswrapper[4808]: I1203 13:54:20.075452 4808 scope.go:117] "RemoveContainer" containerID="402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228" Dec 03 13:54:20.075978 master-0 kubenswrapper[4808]: I1203 13:54:20.075866 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228"} err="failed to get container status \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": rpc error: code = NotFound desc = could not find container \"402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228\": container with ID starting with 402c33e9cc5a4562df3b644d6ad49de97dd73427e386cd16f28bf00be0dba228 not found: ID does not exist" Dec 03 13:54:20.075978 master-0 kubenswrapper[4808]: I1203 13:54:20.075893 4808 scope.go:117] "RemoveContainer" containerID="ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561" Dec 03 13:54:20.076440 master-0 kubenswrapper[4808]: I1203 13:54:20.076406 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561"} err="failed to get container status \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": rpc error: code = NotFound desc = could not find container \"ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561\": container with ID starting with ee26ffd0a3734ca8e40ebf2585ffce20414757ed07aefd1ef721d00161d59561 not found: ID does not exist" Dec 03 13:54:20.076440 master-0 kubenswrapper[4808]: I1203 13:54:20.076438 4808 scope.go:117] "RemoveContainer" containerID="0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a" Dec 03 13:54:20.077009 master-0 kubenswrapper[4808]: I1203 13:54:20.076976 4808 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a"} err="failed to get container status \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": rpc error: code = NotFound desc = could not find container \"0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a\": container with ID starting with 0fe3d73e5dd3371ea7a93a7062c90de37bb5d1d842293e89c2b0bfb77eeeac0a not found: ID does not exist" Dec 03 13:54:20.077009 master-0 kubenswrapper[4808]: I1203 13:54:20.077005 4808 scope.go:117] "RemoveContainer" containerID="f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8" Dec 03 13:54:20.077737 master-0 kubenswrapper[4808]: I1203 13:54:20.077664 4808 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8"} err="failed to get container status \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": rpc error: code = NotFound desc = could not find container \"f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8\": container with ID starting with f1e72e37c626ba1c32f23f859255bd0d87920b477100a353912e4403321adda8 not found: ID does not exist" Dec 03 13:54:20.313341 master-0 kubenswrapper[4808]: I1203 13:54:20.313161 4808 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5stk"] Dec 03 13:54:20.317434 master-0 kubenswrapper[4808]: I1203 13:54:20.317370 4808 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m5stk"] Dec 03 13:54:21.005238 master-0 kubenswrapper[4808]: I1203 13:54:21.005149 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:21.006162 master-0 kubenswrapper[4808]: E1203 13:54:21.005621 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:21.007651 master-0 kubenswrapper[4808]: E1203 13:54:21.007609 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:21.009610 master-0 kubenswrapper[4808]: I1203 13:54:21.009564 4808 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" path="/var/lib/kubelet/pods/d11fa67d-4912-4004-af20-4f88f36e2b80/volumes" Dec 03 13:54:23.005952 master-0 kubenswrapper[4808]: I1203 13:54:23.005863 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:23.006688 master-0 kubenswrapper[4808]: E1203 13:54:23.006047 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:25.005818 master-0 kubenswrapper[4808]: I1203 13:54:25.005697 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:25.006517 master-0 kubenswrapper[4808]: E1203 13:54:25.005901 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:26.009509 master-0 kubenswrapper[4808]: E1203 13:54:26.009400 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:27.005869 master-0 kubenswrapper[4808]: I1203 13:54:27.005752 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:27.006231 master-0 kubenswrapper[4808]: E1203 13:54:27.006017 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:29.005333 master-0 kubenswrapper[4808]: I1203 13:54:29.005223 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:29.006138 master-0 kubenswrapper[4808]: E1203 13:54:29.005496 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:29.006138 master-0 kubenswrapper[4808]: I1203 13:54:29.005815 4808 scope.go:117] "RemoveContainer" containerID="9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2" Dec 03 13:54:29.937992 master-0 kubenswrapper[4808]: I1203 13:54:29.937611 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c"} Dec 03 13:54:30.032525 master-0 kubenswrapper[4808]: I1203 13:54:30.032459 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-c8csx"] Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032638 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kubecfg-setup" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032659 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kubecfg-setup" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032669 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="sbdb" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032676 4808 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="sbdb" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032684 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-node" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032692 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-node" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032701 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-acl-logging" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032708 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-acl-logging" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032717 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032724 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-ovn-metrics" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032732 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032743 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032751 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" 
containerName="nbdb" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032758 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="nbdb" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032769 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="northd" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032777 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="northd" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: E1203 13:54:30.032785 4808 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032795 4808 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032846 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="northd" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032857 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032867 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovnkube-controller" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032874 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="ovn-acl-logging" Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032881 4808 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-ovn-metrics"
Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032888 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="nbdb"
Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032895 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="sbdb"
Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.032902 4808 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fa67d-4912-4004-af20-4f88f36e2b80" containerName="kube-rbac-proxy-node"
Dec 03 13:54:30.033368 master-0 kubenswrapper[4808]: I1203 13:54:30.033130 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-txl6b"]
Dec 03 13:54:30.034525 master-0 kubenswrapper[4808]: I1203 13:54:30.033373 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.034525 master-0 kubenswrapper[4808]: I1203 13:54:30.034067 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.036351 master-0 kubenswrapper[4808]: I1203 13:54:30.036248 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 13:54:30.036664 master-0 kubenswrapper[4808]: I1203 13:54:30.036632 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 13:54:30.036879 master-0 kubenswrapper[4808]: I1203 13:54:30.036851 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 13:54:30.037204 master-0 kubenswrapper[4808]: I1203 13:54:30.037168 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 13:54:30.037315 master-0 kubenswrapper[4808]: I1203 13:54:30.037204 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 13:54:30.037367 master-0 kubenswrapper[4808]: I1203 13:54:30.037195 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 13:54:30.037808 master-0 kubenswrapper[4808]: I1203 13:54:30.037775 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 13:54:30.196504 master-0 kubenswrapper[4808]: I1203 13:54:30.196249 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196504 master-0 kubenswrapper[4808]: I1203 13:54:30.196370 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196516 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196599 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196640 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196672 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196696 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196718 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196748 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.196849 master-0 kubenswrapper[4808]: I1203 13:54:30.196771 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.196852 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.196899 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.196931 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.196969 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.196997 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.197028 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.197069 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197166 master-0 kubenswrapper[4808]: I1203 13:54:30.197099 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197185 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197312 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197398 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197431 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197450 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.197653 master-0 kubenswrapper[4808]: I1203 13:54:30.197468 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.298125 master-0 kubenswrapper[4808]: I1203 13:54:30.298040 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.298125 master-0 kubenswrapper[4808]: I1203 13:54:30.298100 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298125 master-0 kubenswrapper[4808]: I1203 13:54:30.298124 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298125 master-0 kubenswrapper[4808]: I1203 13:54:30.298151 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298178 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298231 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298288 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298309 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298370 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298395 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298407 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298431 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298454 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298476 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298508 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298538 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298573 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298595 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.298622 master-0 kubenswrapper[4808]: I1203 13:54:30.298561 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298619 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298723 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298759 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298707 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298829 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298893 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298798 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298778 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298899 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298477 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298938 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298937 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.299008 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.299003 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298954 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.298657 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299146 master-0 kubenswrapper[4808]: I1203 13:54:30.299081 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299755 master-0 kubenswrapper[4808]: I1203 13:54:30.299094 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299755 master-0 kubenswrapper[4808]: I1203 13:54:30.299099 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.299755 master-0 kubenswrapper[4808]: I1203 13:54:30.299394 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.300539 master-0 kubenswrapper[4808]: I1203 13:54:30.300134 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.300856 master-0 kubenswrapper[4808]: I1203 13:54:30.300665 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.300856 master-0 kubenswrapper[4808]: I1203 13:54:30.300692 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.304188 master-0 kubenswrapper[4808]: I1203 13:54:30.304131 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.304752 master-0 kubenswrapper[4808]: I1203 13:54:30.304704 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.424344 master-0 kubenswrapper[4808]: I1203 13:54:30.424198 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-pcchm"]
Dec 03 13:54:30.425420 master-0 kubenswrapper[4808]: I1203 13:54:30.425338 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:54:30.425562 master-0 kubenswrapper[4808]: E1203 13:54:30.425511 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 13:54:30.501906 master-0 kubenswrapper[4808]: I1203 13:54:30.501477 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:54:30.520600 master-0 kubenswrapper[4808]: I1203 13:54:30.520419 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.521315 master-0 kubenswrapper[4808]: I1203 13:54:30.521233 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.602956 master-0 kubenswrapper[4808]: I1203 13:54:30.602822 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:54:30.653027 master-0 kubenswrapper[4808]: I1203 13:54:30.652915 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:54:30.660784 master-0 kubenswrapper[4808]: I1203 13:54:30.660732 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:54:30.672400 master-0 kubenswrapper[4808]: W1203 13:54:30.672300 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda583723_b3ad_4a6f_b586_09b739bd7f8c.slice/crio-8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1 WatchSource:0}: Error finding container 8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1: Status 404 returned error can't find the container with id 8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1
Dec 03 13:54:30.679185 master-0 kubenswrapper[4808]: W1203 13:54:30.679101 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77430348_b53a_4898_8047_be8bb542a0a7.slice/crio-dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c WatchSource:0}: Error finding container dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c: Status 404 returned error can't find the container with id dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c
Dec 03 13:54:30.723063 master-0 kubenswrapper[4808]: E1203 13:54:30.722978 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 13:54:30.723063 master-0 kubenswrapper[4808]: E1203 13:54:30.723036 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 13:54:30.723063 master-0 kubenswrapper[4808]: E1203 13:54:30.723056 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 13:54:30.723571 master-0 kubenswrapper[4808]: E1203 13:54:30.723143 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:54:31.223115834 +0000 UTC m=+220.663413769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 13:54:30.942015 master-0 kubenswrapper[4808]: I1203 13:54:30.941910 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1"}
Dec 03 13:54:30.942960 master-0 kubenswrapper[4808]: I1203 13:54:30.942903 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c"}
Dec 03 13:54:31.005798 master-0 kubenswrapper[4808]: I1203 13:54:31.005707 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:54:31.006163 master-0 kubenswrapper[4808]: E1203 13:54:31.005909 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 13:54:31.010240 master-0 kubenswrapper[4808]: E1203 13:54:31.010178 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 13:54:31.308289 master-0 kubenswrapper[4808]: I1203 13:54:31.308193 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:54:31.309002 master-0 kubenswrapper[4808]: E1203 13:54:31.308416 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 13:54:31.309002 master-0 kubenswrapper[4808]: E1203 13:54:31.308443 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 13:54:31.309002 master-0 kubenswrapper[4808]: E1203 13:54:31.308460 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod
openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:31.309002 master-0 kubenswrapper[4808]: E1203 13:54:31.308528 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:54:32.308506145 +0000 UTC m=+221.748804080 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:31.948599 master-0 kubenswrapper[4808]: I1203 13:54:31.948520 4808 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="c92c50a11c2a662e5059d5ecc58bf830b95d8aca43091af67255e096313ccb46" exitCode=0 Dec 03 13:54:31.949110 master-0 kubenswrapper[4808]: I1203 13:54:31.948642 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"c92c50a11c2a662e5059d5ecc58bf830b95d8aca43091af67255e096313ccb46"} Dec 03 13:54:31.950905 master-0 kubenswrapper[4808]: I1203 13:54:31.950848 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"55c650b6735d1149a2afda93b8298292e086e4e3f1a7fa967236b4dd8824447e"} Dec 03 13:54:31.950905 master-0 
kubenswrapper[4808]: I1203 13:54:31.950899 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"de476856ca615fd84083d86322de4864227c26e4b7ce9b3ec5af43a55d69be84"} Dec 03 13:54:32.006853 master-0 kubenswrapper[4808]: I1203 13:54:32.005721 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:32.006853 master-0 kubenswrapper[4808]: E1203 13:54:32.005942 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:32.320015 master-0 kubenswrapper[4808]: I1203 13:54:32.319879 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:32.320565 master-0 kubenswrapper[4808]: E1203 13:54:32.320067 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 13:54:32.320565 master-0 kubenswrapper[4808]: E1203 13:54:32.320090 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 13:54:32.320565 master-0 kubenswrapper[4808]: 
E1203 13:54:32.320104 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:32.320565 master-0 kubenswrapper[4808]: E1203 13:54:32.320162 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:54:34.320141886 +0000 UTC m=+223.760439831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959775 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"2eb1c0b87c5115a6c77880a0f6ea1d1e7c19c3d4b6adfbc9b213cb39d18f5119"} Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959917 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"682ed814488a1da09a97f46ae8065a12c156a25ca7d9ebf8ee99e80832d404f9"} Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959933 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" 
event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"f9012451e143c661acf43d4a684e09fb51017c86e48f95ec5cedea2d66519495"} Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959947 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"9788dbc1822077a1345e0665b546240f6ae71123d0574d85f2a1bbad5b369d94"} Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959960 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"511febd919fe51806b9e58c83ddbd24084a2ff41f70013d1f8cf1b73f8d1c121"} Dec 03 13:54:32.959962 master-0 kubenswrapper[4808]: I1203 13:54:32.959973 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"9f5a54c31b58a99af81ae65e75ae6c435f6f05ae1f2ddaef3530aab147be46cc"} Dec 03 13:54:33.005441 master-0 kubenswrapper[4808]: I1203 13:54:33.005340 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:33.005749 master-0 kubenswrapper[4808]: E1203 13:54:33.005521 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:34.005607 master-0 kubenswrapper[4808]: I1203 13:54:34.005505 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:34.006569 master-0 kubenswrapper[4808]: E1203 13:54:34.005689 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:34.738153 master-0 kubenswrapper[4808]: I1203 13:54:34.738073 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:34.738781 master-0 kubenswrapper[4808]: E1203 13:54:34.738316 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 13:54:34.738781 master-0 kubenswrapper[4808]: E1203 13:54:34.738378 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 13:54:34.738781 master-0 kubenswrapper[4808]: E1203 13:54:34.738392 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:34.739136 master-0 kubenswrapper[4808]: E1203 13:54:34.739076 4808 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:54:38.738439747 +0000 UTC m=+228.178737682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:35.006044 master-0 kubenswrapper[4808]: I1203 13:54:35.005760 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:35.006782 master-0 kubenswrapper[4808]: E1203 13:54:35.006095 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:35.976245 master-0 kubenswrapper[4808]: I1203 13:54:35.976144 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"210c6a8d2e386e655950675cf053111f6b97278ea90880c4d559e45206b5f80e"} Dec 03 13:54:36.005766 master-0 kubenswrapper[4808]: I1203 13:54:36.005647 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:36.006100 master-0 kubenswrapper[4808]: E1203 13:54:36.005858 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:36.012013 master-0 kubenswrapper[4808]: E1203 13:54:36.011932 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:37.006137 master-0 kubenswrapper[4808]: I1203 13:54:37.005792 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:37.006137 master-0 kubenswrapper[4808]: E1203 13:54:37.006043 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:37.990968 master-0 kubenswrapper[4808]: I1203 13:54:37.989789 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"23cec35f733927117a13c3db04a2902bbdba779ffa181a6493078d6d61e24067"} Dec 03 13:54:37.990968 master-0 kubenswrapper[4808]: I1203 13:54:37.990679 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:54:38.004978 master-0 kubenswrapper[4808]: I1203 13:54:38.004868 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:38.005995 master-0 kubenswrapper[4808]: E1203 13:54:38.005061 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:38.018503 master-0 kubenswrapper[4808]: I1203 13:54:38.018390 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:54:38.464506 master-0 kubenswrapper[4808]: I1203 13:54:38.463617 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-c8csx" podStartSLOduration=20.463538617 podStartE2EDuration="20.463538617s" podCreationTimestamp="2025-12-03 13:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:54:31.987280022 +0000 UTC m=+221.427577947" watchObservedRunningTime="2025-12-03 13:54:38.463538617 +0000 UTC m=+227.903836592" Dec 03 13:54:38.466347 master-0 kubenswrapper[4808]: I1203 13:54:38.466218 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" podStartSLOduration=19.466202575 podStartE2EDuration="19.466202575s" podCreationTimestamp="2025-12-03 13:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:54:38.462129446 +0000 UTC m=+227.902427421" watchObservedRunningTime="2025-12-03 13:54:38.466202575 +0000 UTC m=+227.906500540" Dec 03 13:54:38.780411 master-0 kubenswrapper[4808]: I1203 13:54:38.780108 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:38.780727 master-0 kubenswrapper[4808]: E1203 
13:54:38.780381 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 13:54:38.780727 master-0 kubenswrapper[4808]: E1203 13:54:38.780522 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 13:54:38.780727 master-0 kubenswrapper[4808]: E1203 13:54:38.780545 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:38.780727 master-0 kubenswrapper[4808]: E1203 13:54:38.780659 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:54:46.780629322 +0000 UTC m=+236.220927407 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:38.994855 master-0 kubenswrapper[4808]: I1203 13:54:38.994708 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:54:38.994855 master-0 kubenswrapper[4808]: I1203 13:54:38.994781 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:54:39.005718 master-0 kubenswrapper[4808]: I1203 13:54:39.005636 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:39.005948 master-0 kubenswrapper[4808]: E1203 13:54:39.005800 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:39.020989 master-0 kubenswrapper[4808]: I1203 13:54:39.020880 4808 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:54:39.273356 master-0 kubenswrapper[4808]: I1203 13:54:39.273194 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ch7xd"] Dec 03 13:54:39.276491 master-0 kubenswrapper[4808]: I1203 13:54:39.276441 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-pcchm"] Dec 03 13:54:39.276612 master-0 kubenswrapper[4808]: I1203 13:54:39.276597 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:39.276754 master-0 kubenswrapper[4808]: E1203 13:54:39.276712 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:39.997471 master-0 kubenswrapper[4808]: I1203 13:54:39.997392 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:39.997750 master-0 kubenswrapper[4808]: E1203 13:54:39.997539 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:41.005500 master-0 kubenswrapper[4808]: I1203 13:54:41.005389 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:41.008486 master-0 kubenswrapper[4808]: E1203 13:54:41.007503 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:41.012907 master-0 kubenswrapper[4808]: E1203 13:54:41.012812 4808 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 13:54:42.005504 master-0 kubenswrapper[4808]: I1203 13:54:42.005441 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:42.009086 master-0 kubenswrapper[4808]: E1203 13:54:42.006421 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:43.005887 master-0 kubenswrapper[4808]: I1203 13:54:43.005806 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:43.006219 master-0 kubenswrapper[4808]: E1203 13:54:43.006012 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:44.005165 master-0 kubenswrapper[4808]: I1203 13:54:44.004941 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:44.006013 master-0 kubenswrapper[4808]: E1203 13:54:44.005172 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:45.005942 master-0 kubenswrapper[4808]: I1203 13:54:45.005844 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:45.006671 master-0 kubenswrapper[4808]: E1203 13:54:45.005988 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 13:54:46.004983 master-0 kubenswrapper[4808]: I1203 13:54:46.004864 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:46.005404 master-0 kubenswrapper[4808]: E1203 13:54:46.005080 4808 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 13:54:46.797079 master-0 kubenswrapper[4808]: I1203 13:54:46.796861 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:46.798023 master-0 kubenswrapper[4808]: E1203 13:54:46.797173 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 13:54:46.798023 master-0 kubenswrapper[4808]: E1203 13:54:46.797235 4808 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 13:54:46.798023 master-0 kubenswrapper[4808]: E1203 13:54:46.797254 4808 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:46.798023 master-0 kubenswrapper[4808]: E1203 13:54:46.797362 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 13:55:02.797339845 +0000 UTC m=+252.237637950 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 13:54:47.005281 master-0 kubenswrapper[4808]: I1203 13:54:47.005128 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:54:47.008345 master-0 kubenswrapper[4808]: I1203 13:54:47.007864 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 13:54:47.008463 master-0 kubenswrapper[4808]: I1203 13:54:47.008428 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 13:54:48.005594 master-0 kubenswrapper[4808]: I1203 13:54:48.005503 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:54:48.008092 master-0 kubenswrapper[4808]: I1203 13:54:48.008043 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 13:54:51.296617 master-0 kubenswrapper[4808]: I1203 13:54:51.296109 4808 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Dec 03 13:54:51.614511 master-0 kubenswrapper[4808]: I1203 13:54:51.614247 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"] Dec 03 13:54:51.615155 master-0 kubenswrapper[4808]: I1203 13:54:51.614702 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.616863 master-0 kubenswrapper[4808]: I1203 13:54:51.616805 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 13:54:51.616939 master-0 kubenswrapper[4808]: I1203 13:54:51.616911 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 13:54:51.617048 master-0 kubenswrapper[4808]: I1203 13:54:51.616911 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 13:54:51.617537 master-0 kubenswrapper[4808]: I1203 13:54:51.617480 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.738540 master-0 kubenswrapper[4808]: I1203 13:54:51.738407 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod 
\"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.738540 master-0 kubenswrapper[4808]: I1203 13:54:51.738550 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.738969 master-0 kubenswrapper[4808]: I1203 13:54:51.738688 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.840036 master-0 kubenswrapper[4808]: I1203 13:54:51.839878 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.840036 master-0 kubenswrapper[4808]: I1203 13:54:51.840035 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" 
Dec 03 13:54:51.840557 master-0 kubenswrapper[4808]: I1203 13:54:51.840084 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.841429 master-0 kubenswrapper[4808]: I1203 13:54:51.841299 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.848553 master-0 kubenswrapper[4808]: I1203 13:54:51.848455 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.899577 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"] Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.900101 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.900642 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"] Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.901206 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903220 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"] Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903787 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903820 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903846 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903879 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.903931 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.904906 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.905203 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.905552 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"] Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.906164 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.906962 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"] Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.907504 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.907682 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.908240 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.908446 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Dec 03 13:54:51.909967 master-0 kubenswrapper[4808]: I1203 13:54:51.909817 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.911511 master-0 kubenswrapper[4808]: I1203 13:54:51.910039 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Dec 03 13:54:51.921529 master-0 kubenswrapper[4808]: I1203 13:54:51.921474 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.921634 master-0 kubenswrapper[4808]: I1203 13:54:51.921589 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 13:54:51.921835 master-0 kubenswrapper[4808]: I1203 13:54:51.921799 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Dec 03 13:54:51.922166 master-0 kubenswrapper[4808]: I1203 13:54:51.922138 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Dec 
03 13:54:51.922434 master-0 kubenswrapper[4808]: I1203 13:54:51.922405 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 13:54:51.922560 master-0 kubenswrapper[4808]: I1203 13:54:51.922487 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"] Dec 03 13:54:51.922695 master-0 kubenswrapper[4808]: I1203 13:54:51.922662 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 13:54:51.923757 master-0 kubenswrapper[4808]: I1203 13:54:51.923725 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:51.925641 master-0 kubenswrapper[4808]: I1203 13:54:51.925606 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"] Dec 03 13:54:51.926026 master-0 kubenswrapper[4808]: I1203 13:54:51.925993 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:51.926526 master-0 kubenswrapper[4808]: I1203 13:54:51.926489 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"] Dec 03 13:54:51.926879 master-0 kubenswrapper[4808]: I1203 13:54:51.926842 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:51.927079 master-0 kubenswrapper[4808]: I1203 13:54:51.927043 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 13:54:51.927283 master-0 kubenswrapper[4808]: I1203 13:54:51.927226 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"] Dec 03 13:54:51.927375 master-0 kubenswrapper[4808]: I1203 13:54:51.927328 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 13:54:51.927593 master-0 kubenswrapper[4808]: I1203 13:54:51.927558 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:51.928006 master-0 kubenswrapper[4808]: I1203 13:54:51.927965 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"] Dec 03 13:54:51.928474 master-0 kubenswrapper[4808]: I1203 13:54:51.928410 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:51.928688 master-0 kubenswrapper[4808]: I1203 13:54:51.928504 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"] Dec 03 13:54:51.929066 master-0 kubenswrapper[4808]: I1203 13:54:51.929004 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 13:54:51.929725 master-0 kubenswrapper[4808]: I1203 13:54:51.929682 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"] Dec 03 13:54:51.930466 master-0 kubenswrapper[4808]: I1203 13:54:51.930425 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:51.931082 master-0 kubenswrapper[4808]: I1203 13:54:51.930977 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:51.932901 master-0 kubenswrapper[4808]: I1203 13:54:51.932819 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 13:54:51.933223 master-0 kubenswrapper[4808]: I1203 13:54:51.933173 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"] Dec 03 13:54:51.933361 master-0 kubenswrapper[4808]: I1203 13:54:51.933322 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 13:54:51.933533 master-0 kubenswrapper[4808]: I1203 13:54:51.933498 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 13:54:51.933671 master-0 kubenswrapper[4808]: I1203 13:54:51.933630 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:51.934010 master-0 kubenswrapper[4808]: I1203 13:54:51.933644 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 13:54:51.934128 master-0 kubenswrapper[4808]: I1203 13:54:51.934091 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 13:54:51.934176 master-0 kubenswrapper[4808]: I1203 13:54:51.934111 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 13:54:51.934176 master-0 kubenswrapper[4808]: I1203 13:54:51.933689 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 13:54:51.934176 master-0 kubenswrapper[4808]: I1203 13:54:51.933729 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 13:54:51.934395 master-0 kubenswrapper[4808]: I1203 13:54:51.934345 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.934469 master-0 kubenswrapper[4808]: I1203 13:54:51.934449 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 13:54:51.934516 master-0 kubenswrapper[4808]: I1203 13:54:51.934486 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 03 13:54:51.934559 master-0 kubenswrapper[4808]: I1203 13:54:51.934534 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 13:54:51.934559 master-0 kubenswrapper[4808]: I1203 
13:54:51.934545 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.936463 master-0 kubenswrapper[4808]: I1203 13:54:51.936407 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"] Dec 03 13:54:51.936898 master-0 kubenswrapper[4808]: I1203 13:54:51.936853 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 13:54:51.936898 master-0 kubenswrapper[4808]: I1203 13:54:51.936885 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 13:54:51.937176 master-0 kubenswrapper[4808]: I1203 13:54:51.937142 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 13:54:51.937291 master-0 kubenswrapper[4808]: I1203 13:54:51.937245 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 13:54:51.937291 master-0 kubenswrapper[4808]: I1203 13:54:51.937276 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 13:54:51.937374 master-0 kubenswrapper[4808]: I1203 13:54:51.937241 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:54:51.937374 master-0 kubenswrapper[4808]: I1203 13:54:51.937288 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.937539 master-0 kubenswrapper[4808]: I1203 13:54:51.937506 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:54:51.937539 master-0 kubenswrapper[4808]: 
I1203 13:54:51.937522 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 13:54:51.937672 master-0 kubenswrapper[4808]: I1203 13:54:51.937651 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 13:54:51.937719 master-0 kubenswrapper[4808]: I1203 13:54:51.937665 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 13:54:51.937719 master-0 kubenswrapper[4808]: I1203 13:54:51.937669 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Dec 03 13:54:51.937894 master-0 kubenswrapper[4808]: I1203 13:54:51.937862 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:51.939361 master-0 kubenswrapper[4808]: I1203 13:54:51.939313 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"] Dec 03 13:54:51.939808 master-0 kubenswrapper[4808]: I1203 13:54:51.939766 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"] Dec 03 13:54:51.940143 master-0 kubenswrapper[4808]: I1203 13:54:51.940103 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:51.940887 master-0 kubenswrapper[4808]: I1203 13:54:51.940784 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:51.941231 master-0 kubenswrapper[4808]: I1203 13:54:51.939677 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 03 13:54:51.941341 master-0 kubenswrapper[4808]: I1203 13:54:51.941194 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 13:54:51.941503 master-0 kubenswrapper[4808]: I1203 13:54:51.941441 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.942509 master-0 kubenswrapper[4808]: I1203 13:54:51.942469 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"] Dec 03 13:54:51.943312 master-0 kubenswrapper[4808]: I1203 13:54:51.943223 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.943397 master-0 kubenswrapper[4808]: I1203 13:54:51.943378 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 13:54:51.943759 master-0 kubenswrapper[4808]: I1203 13:54:51.943576 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:51.943759 master-0 kubenswrapper[4808]: I1203 13:54:51.943622 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 13:54:51.943759 master-0 kubenswrapper[4808]: I1203 13:54:51.943643 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 13:54:51.944201 master-0 kubenswrapper[4808]: I1203 13:54:51.943882 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 13:54:51.944553 master-0 kubenswrapper[4808]: I1203 13:54:51.944519 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 13:54:51.946007 master-0 kubenswrapper[4808]: I1203 13:54:51.945947 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"] Dec 03 13:54:51.947036 master-0 kubenswrapper[4808]: I1203 13:54:51.946992 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:51.950702 master-0 kubenswrapper[4808]: I1203 13:54:51.950646 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 13:54:51.950949 master-0 kubenswrapper[4808]: I1203 13:54:51.950700 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 13:54:51.950949 master-0 kubenswrapper[4808]: I1203 13:54:51.950848 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 13:54:51.951055 master-0 kubenswrapper[4808]: I1203 13:54:51.951040 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 13:54:51.951152 master-0 kubenswrapper[4808]: I1203 13:54:51.951113 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 13:54:51.951863 master-0 kubenswrapper[4808]: I1203 13:54:51.951781 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 13:54:51.952176 master-0 kubenswrapper[4808]: I1203 13:54:51.952120 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 13:54:51.952622 master-0 kubenswrapper[4808]: I1203 13:54:51.952537 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 13:54:51.952815 master-0 kubenswrapper[4808]: I1203 13:54:51.952741 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 13:54:51.953009 master-0 kubenswrapper[4808]: I1203 13:54:51.952576 
4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 13:54:51.959543 master-0 kubenswrapper[4808]: I1203 13:54:51.959416 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 13:54:51.970413 master-0 kubenswrapper[4808]: I1203 13:54:51.970280 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 13:54:51.971957 master-0 kubenswrapper[4808]: I1203 13:54:51.970578 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 13:54:51.974539 master-0 kubenswrapper[4808]: I1203 13:54:51.974487 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"] Dec 03 13:54:52.041380 master-0 kubenswrapper[4808]: I1203 13:54:52.041293 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.041380 master-0 kubenswrapper[4808]: I1203 13:54:52.041374 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041409 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041454 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041504 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041549 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.041656 master-0 kubenswrapper[4808]: I1203 13:54:52.041632 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041664 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041691 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041716 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod 
\"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041744 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041769 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041794 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041835 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.041903 master-0 kubenswrapper[4808]: I1203 13:54:52.041863 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.041888 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.042084 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.042142 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.042173 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.042202 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.042233 master-0 kubenswrapper[4808]: I1203 13:54:52.042223 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.042497 master-0 kubenswrapper[4808]: I1203 13:54:52.042295 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.042497 master-0 kubenswrapper[4808]: I1203 13:54:52.042385 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:52.042497 master-0 kubenswrapper[4808]: I1203 13:54:52.042469 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.042593 master-0 kubenswrapper[4808]: I1203 13:54:52.042510 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.042627 master-0 kubenswrapper[4808]: I1203 13:54:52.042584 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.042662 master-0 kubenswrapper[4808]: I1203 13:54:52.042623 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod 
\"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.042831 master-0 kubenswrapper[4808]: I1203 13:54:52.042743 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.042831 master-0 kubenswrapper[4808]: I1203 13:54:52.042829 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:52.042953 master-0 kubenswrapper[4808]: I1203 13:54:52.042896 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.042953 master-0 kubenswrapper[4808]: I1203 13:54:52.042924 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod 
\"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:52.042953 master-0 kubenswrapper[4808]: I1203 13:54:52.042943 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.042983 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.043007 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.043056 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.043090 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.043118 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:52.043140 master-0 kubenswrapper[4808]: I1203 13:54:52.043137 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043204 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 
13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043295 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043328 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043354 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043380 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043405 4808 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043443 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043483 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043513 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.043548 master-0 kubenswrapper[4808]: I1203 13:54:52.043541 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 13:54:52.043891 master-0 kubenswrapper[4808]: I1203 13:54:52.043569 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.043891 master-0 kubenswrapper[4808]: I1203 13:54:52.043589 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.043891 master-0 kubenswrapper[4808]: I1203 13:54:52.043613 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:52.131629 master-0 kubenswrapper[4808]: I1203 13:54:52.126630 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"] Dec 03 13:54:52.133894 master-0 
kubenswrapper[4808]: I1203 13:54:52.133845 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"] Dec 03 13:54:52.134795 master-0 kubenswrapper[4808]: I1203 13:54:52.134736 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"] Dec 03 13:54:52.135636 master-0 kubenswrapper[4808]: I1203 13:54:52.135609 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"] Dec 03 13:54:52.136528 master-0 kubenswrapper[4808]: I1203 13:54:52.136501 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"] Dec 03 13:54:52.137288 master-0 kubenswrapper[4808]: I1203 13:54:52.137240 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"] Dec 03 13:54:52.138108 master-0 kubenswrapper[4808]: I1203 13:54:52.138075 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"] Dec 03 13:54:52.138858 master-0 kubenswrapper[4808]: I1203 13:54:52.138824 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"] Dec 03 13:54:52.139863 master-0 kubenswrapper[4808]: I1203 13:54:52.139638 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"] Dec 03 13:54:52.140395 master-0 kubenswrapper[4808]: I1203 13:54:52.140308 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"] Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144085 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144149 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144167 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144200 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod 
\"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.144166 master-0 kubenswrapper[4808]: I1203 13:54:52.144217 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144253 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144287 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144303 4808 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144336 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144355 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144371 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144390 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144409 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144425 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144445 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144462 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144482 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.144685 master-0 kubenswrapper[4808]: I1203 13:54:52.144506 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144524 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144542 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144561 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144580 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144601 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144630 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144654 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144677 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144699 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144723 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144744 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: I1203 13:54:52.144763 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: E1203 13:54:52.144742 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: E1203 13:54:52.144810 4808 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: E1203 13:54:52.144886 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.644858972 +0000 UTC m=+242.085156897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found
Dec 03 13:54:52.145445 master-0 kubenswrapper[4808]: E1203 13:54:52.144928 4808 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: E1203 13:54:52.144947 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.644917724 +0000 UTC m=+242.085215669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: E1203 13:54:52.144967 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: E1203 13:54:52.144984 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.644968295 +0000 UTC m=+242.085266230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: E1203 13:54:52.145039 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.645020507 +0000 UTC m=+242.085318522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.144787 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145141 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145172 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145220 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145252 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145300 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145320 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.146022 master-0 kubenswrapper[4808]: I1203 13:54:52.145347 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145369 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145393 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145421 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145447 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145466 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145496 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145525 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145547 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145547 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145573 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145641 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145678 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145711 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.146547 master-0 kubenswrapper[4808]: I1203 13:54:52.145781 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:54:52.147395 master-0 kubenswrapper[4808]: I1203 13:54:52.145807 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:54:52.147395 master-0 kubenswrapper[4808]: E1203 13:54:52.145946 4808 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:54:52.147395 master-0 kubenswrapper[4808]: I1203 13:54:52.145970 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.147395 master-0 kubenswrapper[4808]: E1203 13:54:52.145989 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.645979175 +0000 UTC m=+242.086277110 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found
Dec 03 13:54:52.148844 master-0 kubenswrapper[4808]: I1203 13:54:52.148706 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:54:52.149063 master-0 kubenswrapper[4808]: I1203 13:54:52.148993 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:54:52.149180 master-0 kubenswrapper[4808]: I1203 13:54:52.149123 4808 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-n24qb"]
Dec 03 13:54:52.149274 master-0 kubenswrapper[4808]: E1203 13:54:52.149228 4808 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Dec 03 13:54:52.149347 master-0 kubenswrapper[4808]: E1203 13:54:52.149328 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.649303761 +0000 UTC m=+242.089601906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found
Dec 03 13:54:52.149772 master-0 kubenswrapper[4808]: I1203 13:54:52.149727 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 13:54:52.149830 master-0 kubenswrapper[4808]: I1203 13:54:52.149801 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:52.151940 master-0 kubenswrapper[4808]: I1203 13:54:52.150500 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:54:52.151940 master-0 kubenswrapper[4808]: I1203 13:54:52.150525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:54:52.151940 master-0 kubenswrapper[4808]: I1203 13:54:52.150550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:54:52.151940 master-0 kubenswrapper[4808]: I1203 13:54:52.150938 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:54:52.152171 master-0 kubenswrapper[4808]: I1203 13:54:52.145644 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.152420 master-0 kubenswrapper[4808]: E1203 13:54:52.152378 4808 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Dec 03 13:54:52.152492 master-0 kubenswrapper[4808]: E1203 13:54:52.152484 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.652447093 +0000 UTC m=+242.092745028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found
Dec 03 13:54:52.153037 master-0 kubenswrapper[4808]: I1203 13:54:52.152967 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.153194 master-0 kubenswrapper[4808]: I1203 13:54:52.153137 4808 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 13:54:52.153248 master-0 kubenswrapper[4808]: E1203 13:54:52.153219 4808 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Dec 03 13:54:52.153398 master-0 kubenswrapper[4808]: E1203 13:54:52.153327 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.653294728 +0000 UTC m=+242.093592663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found
Dec 03 13:54:52.154942 master-0 kubenswrapper[4808]: I1203 13:54:52.154879 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:54:52.155418 master-0 kubenswrapper[4808]: I1203 13:54:52.155374 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.155505 master-0 kubenswrapper[4808]: I1203 13:54:52.155439 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.155581 master-0 kubenswrapper[4808]: I1203 13:54:52.155550 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.155984 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.156044 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.156298 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.156513 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.157315 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:54:52.157478 master-0 kubenswrapper[4808]: I1203 13:54:52.157448 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:54:52.157759 master-0 kubenswrapper[4808]: I1203 13:54:52.157688 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.158916 master-0 kubenswrapper[4808]: I1203 13:54:52.158819 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:54:52.159276 master-0 kubenswrapper[4808]: I1203 13:54:52.159204 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:54:52.163614 master-0 kubenswrapper[4808]: I1203 13:54:52.159842 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:54:52.163614 master-0 kubenswrapper[4808]: I1203 13:54:52.163306 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:54:52.178714 master-0 kubenswrapper[4808]: I1203 13:54:52.177818 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 13:54:52.182361 master-0 kubenswrapper[4808]: I1203 13:54:52.182251 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:54:52.183806 master-0 kubenswrapper[4808]: I1203 13:54:52.183743 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:54:52.184461 master-0 kubenswrapper[4808]: I1203 13:54:52.184387 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:54:52.189747 master-0 kubenswrapper[4808]: I1203 13:54:52.189678 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:54:52.190159 master-0 kubenswrapper[4808]: I1203 13:54:52.190122 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:52.191273 master-0 kubenswrapper[4808]: I1203 13:54:52.191152 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID:
\"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:52.197934 master-0 kubenswrapper[4808]: I1203 13:54:52.196554 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.197934 master-0 kubenswrapper[4808]: I1203 13:54:52.196684 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.197934 master-0 kubenswrapper[4808]: I1203 13:54:52.196783 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.197934 master-0 kubenswrapper[4808]: I1203 13:54:52.197378 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 13:54:52.199826 master-0 kubenswrapper[4808]: I1203 13:54:52.199337 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.201320 master-0 kubenswrapper[4808]: I1203 13:54:52.201245 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:52.201590 master-0 kubenswrapper[4808]: I1203 13:54:52.201525 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.202349 master-0 kubenswrapper[4808]: I1203 13:54:52.201977 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.202349 master-0 kubenswrapper[4808]: I1203 13:54:52.202252 4808 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.202997 master-0 kubenswrapper[4808]: I1203 13:54:52.202951 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.203700 master-0 kubenswrapper[4808]: I1203 13:54:52.203648 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.205945 master-0 kubenswrapper[4808]: I1203 13:54:52.205851 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:52.227588 master-0 kubenswrapper[4808]: I1203 13:54:52.227475 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:54:52.236217 master-0 kubenswrapper[4808]: I1203 13:54:52.236152 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 13:54:52.247515 master-0 kubenswrapper[4808]: I1203 13:54:52.247341 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:52.247515 master-0 kubenswrapper[4808]: I1203 13:54:52.247437 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.247515 master-0 kubenswrapper[4808]: I1203 13:54:52.247478 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.247515 master-0 kubenswrapper[4808]: I1203 13:54:52.247533 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod 
\"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.247515 master-0 kubenswrapper[4808]: I1203 13:54:52.247553 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.247968 master-0 kubenswrapper[4808]: I1203 13:54:52.247577 4808 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.247968 master-0 kubenswrapper[4808]: E1203 13:54:52.247571 4808 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:54:52.247968 master-0 kubenswrapper[4808]: I1203 13:54:52.247616 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.247968 master-0 kubenswrapper[4808]: I1203 13:54:52.247650 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod 
\"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:52.247968 master-0 kubenswrapper[4808]: E1203 13:54:52.247772 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:54:52.747746736 +0000 UTC m=+242.188044671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:54:52.248534 master-0 kubenswrapper[4808]: I1203 13:54:52.248376 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:54:52.248844 master-0 kubenswrapper[4808]: I1203 13:54:52.248544 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.252713 master-0 kubenswrapper[4808]: I1203 13:54:52.252630 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.271055 master-0 kubenswrapper[4808]: I1203 13:54:52.270528 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:54:52.271788 master-0 kubenswrapper[4808]: I1203 13:54:52.271746 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"] Dec 03 13:54:52.274005 master-0 kubenswrapper[4808]: I1203 13:54:52.273958 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"] Dec 03 13:54:52.275817 master-0 kubenswrapper[4808]: I1203 13:54:52.275737 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"] Dec 03 13:54:52.282893 master-0 kubenswrapper[4808]: I1203 13:54:52.282818 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"] Dec 03 13:54:52.284470 master-0 kubenswrapper[4808]: I1203 13:54:52.283995 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.284470 master-0 kubenswrapper[4808]: I1203 13:54:52.284172 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"] Dec 03 13:54:52.288081 master-0 kubenswrapper[4808]: I1203 13:54:52.287984 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"] Dec 03 13:54:52.291017 master-0 kubenswrapper[4808]: I1203 13:54:52.290913 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"] Dec 03 13:54:52.291017 master-0 kubenswrapper[4808]: I1203 13:54:52.290975 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"] Dec 03 13:54:52.292108 master-0 kubenswrapper[4808]: I1203 13:54:52.292032 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:52.292227 master-0 kubenswrapper[4808]: I1203 13:54:52.292195 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:54:52.301142 master-0 kubenswrapper[4808]: I1203 13:54:52.300855 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:54:52.309509 master-0 kubenswrapper[4808]: I1203 13:54:52.309418 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:54:52.328320 master-0 kubenswrapper[4808]: I1203 13:54:52.328215 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:54:52.355357 master-0 kubenswrapper[4808]: I1203 13:54:52.355101 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.355357 master-0 kubenswrapper[4808]: I1203 13:54:52.355334 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.355695 master-0 kubenswrapper[4808]: I1203 13:54:52.355401 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.356526 master-0 kubenswrapper[4808]: I1203 13:54:52.355784 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.356526 master-0 kubenswrapper[4808]: I1203 13:54:52.356428 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.360878 master-0 kubenswrapper[4808]: I1203 13:54:52.358917 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:54:52.372787 master-0 kubenswrapper[4808]: I1203 13:54:52.372719 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:54:52.390695 master-0 kubenswrapper[4808]: I1203 13:54:52.390619 4808 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:54:52.536852 master-0 kubenswrapper[4808]: I1203 13:54:52.536737 4808 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658176 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658248 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658317 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658347 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658396 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") 
" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658473 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: I1203 13:54:52.658505 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658669 4808 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658723 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.658704762 +0000 UTC m=+243.099002687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658866 4808 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658911 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.658897407 +0000 UTC m=+243.099195342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658968 4808 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.658988 4808 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.659058 4808 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.659136 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret 
"performance-addon-operator-webhook-cert" not found Dec 03 13:54:52.661204 master-0 kubenswrapper[4808]: E1203 13:54:52.659160 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:54:52.661917 master-0 kubenswrapper[4808]: E1203 13:54:52.659004 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.65899023 +0000 UTC m=+243.099288165 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found Dec 03 13:54:52.661917 master-0 kubenswrapper[4808]: E1203 13:54:52.659375 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.659301679 +0000 UTC m=+243.099599614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:54:52.661917 master-0 kubenswrapper[4808]: E1203 13:54:52.659436 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.659423393 +0000 UTC m=+243.099721508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:52.661917 master-0 kubenswrapper[4808]: E1203 13:54:52.659462 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.659450803 +0000 UTC m=+243.099748848 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found Dec 03 13:54:52.661917 master-0 kubenswrapper[4808]: E1203 13:54:52.660448 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.66035136 +0000 UTC m=+243.100649315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:54:52.662451 master-0 kubenswrapper[4808]: E1203 13:54:52.662336 4808 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:54:52.662630 master-0 kubenswrapper[4808]: E1203 13:54:52.662594 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.662559634 +0000 UTC m=+243.102857569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:54:52.759823 master-0 kubenswrapper[4808]: E1203 13:54:52.759703 4808 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:54:52.760217 master-0 kubenswrapper[4808]: E1203 13:54:52.759875 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:54:53.759841184 +0000 UTC m=+243.200139189 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:54:52.760217 master-0 kubenswrapper[4808]: I1203 13:54:52.759426 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:52.826646 master-0 kubenswrapper[4808]: I1203 13:54:52.826541 4808 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:54:53.036794 master-0 kubenswrapper[4808]: I1203 13:54:53.036617 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"] Dec 03 13:54:53.040330 master-0 kubenswrapper[4808]: I1203 13:54:53.040088 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"] Dec 03 13:54:53.041452 master-0 kubenswrapper[4808]: I1203 13:54:53.041349 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"] Dec 03 13:54:53.044407 master-0 kubenswrapper[4808]: I1203 13:54:53.043780 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"] Dec 03 13:54:53.044407 master-0 kubenswrapper[4808]: W1203 13:54:53.044388 4808 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc180b512_bf0c_4ddc_a5cf_f04acc830a61.slice/crio-4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c WatchSource:0}: Error finding container 4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c: Status 404 returned error can't find the container with id 4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c Dec 03 13:54:53.047745 master-0 kubenswrapper[4808]: I1203 13:54:53.047128 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498"} Dec 03 13:54:53.070670 master-0 kubenswrapper[4808]: I1203 13:54:53.070643 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"] Dec 03 13:54:53.086533 master-0 kubenswrapper[4808]: I1203 13:54:53.081509 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"] Dec 03 13:54:53.086533 master-0 kubenswrapper[4808]: I1203 13:54:53.081608 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"] Dec 03 13:54:53.086533 master-0 kubenswrapper[4808]: I1203 13:54:53.085833 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"] Dec 03 13:54:53.089988 master-0 kubenswrapper[4808]: I1203 13:54:53.089903 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"] Dec 03 13:54:53.104764 master-0 kubenswrapper[4808]: I1203 13:54:53.101862 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"] Dec 03 13:54:53.104764 master-0 kubenswrapper[4808]: I1203 13:54:53.102848 4808 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"] Dec 03 13:54:53.671809 master-0 kubenswrapper[4808]: I1203 13:54:53.671198 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:53.671809 master-0 kubenswrapper[4808]: I1203 13:54:53.671818 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.671562 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.671860 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.671979 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.671941779 +0000 UTC m=+245.112239884 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672075 4808 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.672118 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672016 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672154 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672136165 +0000 UTC m=+245.112434100 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672193 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672177396 +0000 UTC m=+245.112475331 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.672207 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.672236 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.672293 4808 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672208 4808 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: I1203 13:54:53.672325 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672382 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672353741 +0000 UTC m=+245.112651876 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672293 4808 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:54:53.673044 master-0 kubenswrapper[4808]: E1203 13:54:53.672408 4808 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672428 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672419173 +0000 UTC m=+245.112717108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672450 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672438924 +0000 UTC m=+245.112737139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672460 4808 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672484 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672474235 +0000 UTC m=+245.112772290 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672518 4808 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:53.673720 master-0 kubenswrapper[4808]: E1203 13:54:53.672574 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.672562407 +0000 UTC m=+245.112860452 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:53.773535 master-0 kubenswrapper[4808]: I1203 13:54:53.773444 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:54:53.773815 master-0 kubenswrapper[4808]: E1203 13:54:53.773697 4808 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:54:53.773856 master-0 kubenswrapper[4808]: E1203 13:54:53.773842 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:54:55.773813733 +0000 UTC m=+245.214111668 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:54:54.055370 master-0 kubenswrapper[4808]: I1203 13:54:54.055308 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361"} Dec 03 13:54:54.057184 master-0 kubenswrapper[4808]: I1203 13:54:54.057139 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592"} Dec 03 13:54:54.059280 master-0 kubenswrapper[4808]: I1203 13:54:54.059222 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"27c1a40f3c3bc0e48435031dbfc32e5c0ade7b6afed6f0f6f463c37953bf90b2"} Dec 03 13:54:54.059372 master-0 kubenswrapper[4808]: I1203 13:54:54.059305 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44"} Dec 03 13:54:54.061411 master-0 kubenswrapper[4808]: I1203 13:54:54.061358 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a"} Dec 03 13:54:54.072039 master-0 kubenswrapper[4808]: I1203 13:54:54.071947 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e"} Dec 03 13:54:54.074619 master-0 kubenswrapper[4808]: I1203 13:54:54.074572 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53"} Dec 03 13:54:54.076158 master-0 kubenswrapper[4808]: I1203 13:54:54.076057 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55"} Dec 03 13:54:54.077222 master-0 kubenswrapper[4808]: I1203 13:54:54.077153 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e"} Dec 03 13:54:54.078901 master-0 kubenswrapper[4808]: I1203 13:54:54.078843 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" 
event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942"} Dec 03 13:54:54.080413 master-0 kubenswrapper[4808]: I1203 13:54:54.080311 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803"} Dec 03 13:54:54.082026 master-0 kubenswrapper[4808]: I1203 13:54:54.081945 4808 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c"} Dec 03 13:54:54.476980 master-0 kubenswrapper[4808]: I1203 13:54:54.476152 4808 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podStartSLOduration=190.476074704 podStartE2EDuration="3m10.476074704s" podCreationTimestamp="2025-12-03 13:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:54:54.470656807 +0000 UTC m=+243.910954752" watchObservedRunningTime="2025-12-03 13:54:54.476074704 +0000 UTC m=+243.916372649" Dec 03 13:54:55.699601 master-0 kubenswrapper[4808]: I1203 13:54:55.699484 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:55.699601 
master-0 kubenswrapper[4808]: I1203 13:54:55.699593 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: I1203 13:54:55.699632 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: I1203 13:54:55.699666 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: I1203 13:54:55.699710 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: I1203 13:54:55.699740 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.699738 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.699865 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.699838418 +0000 UTC m=+249.140136403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.699908 4808 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.699984 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.699959641 +0000 UTC m=+249.140257646 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700044 4808 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700079 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700068504 +0000 UTC m=+249.140366529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700133 4808 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700163 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700153947 +0000 UTC m=+249.140452002 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700216 4808 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:54:55.700286 master-0 kubenswrapper[4808]: E1203 13:54:55.700244 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700235049 +0000 UTC m=+249.140533094 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700320 4808 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700351 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700342152 +0000 UTC m=+249.140640157 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700401 4808 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700430 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700421755 +0000 UTC m=+249.140719780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: I1203 13:54:55.699779 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: I1203 13:54:55.700469 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700550 4808 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Dec 03 13:54:55.700789 master-0 kubenswrapper[4808]: E1203 13:54:55.700580 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.700571659 +0000 UTC m=+249.140869664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found
Dec 03 13:54:55.801540 master-0 kubenswrapper[4808]: I1203 13:54:55.801197 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:54:55.801540 master-0 kubenswrapper[4808]: E1203 13:54:55.801432 4808 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Dec 03 13:54:55.801540 master-0 kubenswrapper[4808]: E1203 13:54:55.801507 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:54:59.801490045 +0000 UTC m=+249.241787980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found
Dec 03 13:54:56.914240 master-0 kubenswrapper[4808]: I1203 13:54:56.914128 4808 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:54:56.916232 master-0 kubenswrapper[4808]: I1203 13:54:56.916050 4808 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 13:54:56.925356 master-0 kubenswrapper[4808]: E1203 13:54:56.925238 4808 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Dec 03 13:54:56.925356 master-0 kubenswrapper[4808]: E1203 13:54:56.925324 4808 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:56:58.92530311 +0000 UTC m=+368.365601045 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found
Dec 03 13:54:58.697290 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Dec 03 13:54:58.713026 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Dec 03 13:54:58.713462 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Dec 03 13:54:58.717656 master-0 systemd[1]: kubelet.service: Consumed 18.394s CPU time.
Dec 03 13:54:58.739461 master-0 systemd[1]: Starting Kubernetes Kubelet...
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 13:54:58.858925 master-0 kubenswrapper[8988]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
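[Editor's note] The deprecation warnings above all point at the kubelet config file referenced by --config. As a hedged sketch only (field names from the upstream KubeletConfiguration v1beta1 API; values copied from the FLAG dump later in this log, not from the node's actual /etc/kubernetes/kubelet.conf), the flagged options would map to something like:

```yaml
# Sketch: KubeletConfiguration equivalents for the deprecated flags warned
# about above. Values are taken from this log's FLAG dump; the real config
# on this node may differ.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock   # replaces --container-runtime-endpoint
registerWithTaints:                                  # replaces --register-with-taints
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:                                      # replaces --system-reserved
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
# --volume-plugin-dir maps to the volumePluginDir field; its value is not
# shown in this log, so it is omitted here.
```

--minimum-container-ttl-duration has no config-file equivalent; per the warning it is superseded by the evictionHard/evictionSoft settings.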
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: I1203 13:54:58.859017 8988 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861352 8988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861363 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861368 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861372 8988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861377 8988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861381 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861386 8988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861391 8988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861400 8988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861407 8988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861412 8988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861418 8988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861423 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861428 8988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861432 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861437 8988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861441 8988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:54:58.861685 master-0 kubenswrapper[8988]: W1203 13:54:58.861445 8988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861449 8988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861454 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861458 8988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861473 8988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861479 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861483 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861486 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861490 8988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861494 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861497 8988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861501 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861505 8988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861508 8988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861513 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861517 8988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861520 8988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861524 8988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861528 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861531 8988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:54:58.862388 master-0 kubenswrapper[8988]: W1203 13:54:58.861535 8988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861541 8988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861547 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861555 8988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861563 8988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861567 8988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861572 8988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861576 8988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861581 8988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861586 8988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861590 8988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861595 8988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861600 8988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861604 8988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861608 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861613 8988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861617 8988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861622 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861628 8988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:54:58.862894 master-0 kubenswrapper[8988]: W1203 13:54:58.861633 8988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861641 8988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861645 8988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861650 8988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861655 8988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861658 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861664 8988 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861669 8988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861673 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861676 8988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861680 8988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861684 8988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861689 8988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861694 8988 feature_gate.go:330] unrecognized feature gate: Example Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861699 8988 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: W1203 13:54:58.861704 8988 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861825 8988 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861841 8988 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861852 8988 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861861 8988 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861872 8988 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 13:54:58.863470 master-0 kubenswrapper[8988]: I1203 13:54:58.861877 8988 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861884 8988 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861890 8988 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861896 8988 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861901 8988 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861906 8988 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 13:54:58.864077 master-0 
kubenswrapper[8988]: I1203 13:54:58.861912 8988 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861917 8988 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861922 8988 flags.go:64] FLAG: --cgroup-root="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861927 8988 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861931 8988 flags.go:64] FLAG: --client-ca-file="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861935 8988 flags.go:64] FLAG: --cloud-config="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861939 8988 flags.go:64] FLAG: --cloud-provider="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861944 8988 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861949 8988 flags.go:64] FLAG: --cluster-domain="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861953 8988 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861958 8988 flags.go:64] FLAG: --config-dir="" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861962 8988 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861967 8988 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861974 8988 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861978 8988 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861983 8988 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861988 8988 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861992 8988 flags.go:64] FLAG: --contention-profiling="false" Dec 03 13:54:58.864077 master-0 kubenswrapper[8988]: I1203 13:54:58.861996 8988 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862000 8988 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862005 8988 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862009 8988 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862015 8988 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862020 8988 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862024 8988 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862028 8988 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862032 8988 flags.go:64] FLAG: --enable-server="true" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862036 8988 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862041 8988 flags.go:64] FLAG: --event-burst="100" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862046 8988 flags.go:64] FLAG: --event-qps="50" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862051 8988 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 
13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862055 8988 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862059 8988 flags.go:64] FLAG: --eviction-hard="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862065 8988 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862069 8988 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862073 8988 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862078 8988 flags.go:64] FLAG: --eviction-soft="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862082 8988 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862086 8988 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862090 8988 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862095 8988 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862099 8988 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862103 8988 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 13:54:58.864708 master-0 kubenswrapper[8988]: I1203 13:54:58.862107 8988 flags.go:64] FLAG: --feature-gates="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862113 8988 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862117 8988 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: 
I1203 13:54:58.862122 8988 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862126 8988 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862131 8988 flags.go:64] FLAG: --healthz-port="10248" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862135 8988 flags.go:64] FLAG: --help="false" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862139 8988 flags.go:64] FLAG: --hostname-override="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862143 8988 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862147 8988 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862152 8988 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862156 8988 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862160 8988 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862165 8988 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862169 8988 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862173 8988 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862177 8988 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862181 8988 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862185 8988 
flags.go:64] FLAG: --kube-api-qps="50" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862189 8988 flags.go:64] FLAG: --kube-reserved="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862193 8988 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862197 8988 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862202 8988 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862206 8988 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862210 8988 flags.go:64] FLAG: --lock-file="" Dec 03 13:54:58.865416 master-0 kubenswrapper[8988]: I1203 13:54:58.862215 8988 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862219 8988 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862224 8988 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862231 8988 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862235 8988 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862240 8988 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862244 8988 flags.go:64] FLAG: --logging-format="text" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862249 8988 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862281 8988 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 
03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862285 8988 flags.go:64] FLAG: --manifest-url="" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862290 8988 flags.go:64] FLAG: --manifest-url-header="" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862296 8988 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862300 8988 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862306 8988 flags.go:64] FLAG: --max-pods="110" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862311 8988 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862316 8988 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862320 8988 flags.go:64] FLAG: --memory-manager-policy="None" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862325 8988 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862329 8988 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862336 8988 flags.go:64] FLAG: --node-ip="192.168.32.10" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862340 8988 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862351 8988 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862355 8988 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862359 8988 flags.go:64] FLAG: 
--oom-score-adj="-999" Dec 03 13:54:58.866052 master-0 kubenswrapper[8988]: I1203 13:54:58.862364 8988 flags.go:64] FLAG: --pod-cidr="" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862368 8988 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862375 8988 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862379 8988 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862384 8988 flags.go:64] FLAG: --pods-per-core="0" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862388 8988 flags.go:64] FLAG: --port="10250" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862392 8988 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862397 8988 flags.go:64] FLAG: --provider-id="" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862402 8988 flags.go:64] FLAG: --qos-reserved="" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862407 8988 flags.go:64] FLAG: --read-only-port="10255" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862412 8988 flags.go:64] FLAG: --register-node="true" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862417 8988 flags.go:64] FLAG: --register-schedulable="true" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862421 8988 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862429 8988 flags.go:64] FLAG: --registry-burst="10" Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862433 8988 flags.go:64] FLAG: 
--registry-qps="5"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862437 8988 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862441 8988 flags.go:64] FLAG: --reserved-memory=""
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862449 8988 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862453 8988 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862458 8988 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862462 8988 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862466 8988 flags.go:64] FLAG: --runonce="false"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862470 8988 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862474 8988 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862479 8988 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 13:54:58.866830 master-0 kubenswrapper[8988]: I1203 13:54:58.862483 8988 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862488 8988 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862494 8988 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862498 8988 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862503 8988 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 13:54:58.867585
master-0 kubenswrapper[8988]: I1203 13:54:58.862507 8988 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862511 8988 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862515 8988 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862519 8988 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862523 8988 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862528 8988 flags.go:64] FLAG: --system-cgroups=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862532 8988 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862538 8988 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862542 8988 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862546 8988 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862551 8988 flags.go:64] FLAG: --tls-min-version=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862555 8988 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862559 8988 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862563 8988 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862568 8988 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203
13:54:58.862572 8988 flags.go:64] FLAG: --v="2"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862579 8988 flags.go:64] FLAG: --version="false"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862584 8988 flags.go:64] FLAG: --vmodule=""
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862589 8988 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 13:54:58.867585 master-0 kubenswrapper[8988]: I1203 13:54:58.862596 8988 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862690 8988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862697 8988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862701 8988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862706 8988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862709 8988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862713 8988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862717 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862721 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862726 8988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true.
It will be removed in a future release.
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862731 8988 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862735 8988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862739 8988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862743 8988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862748 8988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862751 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862755 8988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862759 8988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862762 8988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:54:58.868201 master-0 kubenswrapper[8988]: W1203 13:54:58.862766 8988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862770 8988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862773 8988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862777 8988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03
13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862780 8988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862784 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862788 8988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862792 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862796 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862800 8988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862803 8988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862807 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862810 8988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862816 8988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862821 8988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862824 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862828 8988 feature_gate.go:330] unrecognized feature gate:
MachineAPIMigration
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862832 8988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862835 8988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862839 8988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:54:58.868696 master-0 kubenswrapper[8988]: W1203 13:54:58.862843 8988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862849 8988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862853 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862857 8988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862861 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862866 8988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862870 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862874 8988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862878 8988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862882 8988 feature_gate.go:330] unrecognized feature gate:
VSphereMultiNetworks
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862888 8988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862892 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862896 8988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862900 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862905 8988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862909 8988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862913 8988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862917 8988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862921 8988 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:54:58.869205 master-0 kubenswrapper[8988]: W1203 13:54:58.862926 8988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862931 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862935 8988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862940 8988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862944 8988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862949 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862955 8988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862959 8988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862963 8988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862967 8988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862971 8988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862975 8988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862980 8988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203 13:54:58.862984 8988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: W1203
13:54:58.862988 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:54:58.869726 master-0 kubenswrapper[8988]: I1203 13:54:58.863009 8988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:54:58.874407 master-0 kubenswrapper[8988]: I1203 13:54:58.873776 8988 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 13:54:58.874407 master-0 kubenswrapper[8988]: I1203 13:54:58.874398 8988 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 13:54:58.874618 master-0 kubenswrapper[8988]: W1203 13:54:58.874595 8988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:54:58.874618 master-0 kubenswrapper[8988]: W1203 13:54:58.874611 8988 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:54:58.874618 master-0 kubenswrapper[8988]: W1203 13:54:58.874617 8988 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874623 8988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874631 8988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874636 8988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874640 8988
feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874645 8988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874649 8988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874654 8988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874659 8988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874663 8988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874667 8988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874671 8988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874675 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874681 8988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874685 8988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874689 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874693 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874697 8988 feature_gate.go:330] unrecognized feature gate:
MinimumKubeletVersion
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874701 8988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874705 8988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:54:58.874733 master-0 kubenswrapper[8988]: W1203 13:54:58.874709 8988 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874715 8988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874719 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874725 8988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874731 8988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874736 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874741 8988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874746 8988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874751 8988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874756 8988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874761 8988 feature_gate.go:330]
unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874766 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874772 8988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874777 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874782 8988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874787 8988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874792 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874798 8988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874804 8988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:54:58.875370 master-0 kubenswrapper[8988]: W1203 13:54:58.874810 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874816 8988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874822 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874829 8988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874838 8988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874845 8988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874850 8988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874855 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874861 8988 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874867 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874872 8988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874878 8988 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874882 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874887 8988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874891 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874897 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874901 8988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203
13:54:58.874982 8988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874987 8988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874992 8988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:54:58.875890 master-0 kubenswrapper[8988]: W1203 13:54:58.874996 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875000 8988 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875004 8988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875008 8988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875015 8988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875020 8988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875024 8988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875029 8988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875034 8988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875039 8988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875044 8988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: I1203 13:54:58.875053 8988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875303 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875322 8988 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:54:58.876438 master-0 kubenswrapper[8988]: W1203 13:54:58.875331 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875337 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875342 8988 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875347 8988 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875351 8988 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875356 8988 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875361 8988 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875366 8988 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875370 8988 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875375 8988 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875380 8988 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875386 8988 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875391 8988 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875397 8988 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875405 8988 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875411 8988 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875416 8988 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875420 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875424 8988 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:54:58.876787 master-0 kubenswrapper[8988]: W1203 13:54:58.875428 8988 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875432 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875436 8988 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875440 8988 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875444 8988 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:54:58.877281
master-0 kubenswrapper[8988]: W1203 13:54:58.875448 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875452 8988 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875456 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875460 8988 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875464 8988 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875468 8988 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875472 8988 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875477 8988 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875482 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875486 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875490 8988 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875495 8988 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875502 8988 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875506 8988 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875510 8988 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 13:54:58.877281 master-0 kubenswrapper[8988]: W1203 13:54:58.875514 8988 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875519 8988 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875523 8988 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875527 8988 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875531 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875535 8988 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 13:54:58.877800 master-0 
kubenswrapper[8988]: W1203 13:54:58.875540 8988 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875563 8988 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875625 8988 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875631 8988 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875641 8988 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875645 8988 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875653 8988 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875657 8988 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875661 8988 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875666 8988 feature_gate.go:330] unrecognized feature gate: Example Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875669 8988 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875673 8988 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875677 8988 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875681 8988 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 13:54:58.877800 master-0 kubenswrapper[8988]: W1203 13:54:58.875684 8988 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875689 8988 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875692 8988 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875696 8988 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875700 8988 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875704 8988 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875708 8988 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875712 8988 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875716 8988 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875720 8988 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: W1203 13:54:58.875723 8988 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: I1203 13:54:58.875731 8988 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false 
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 13:54:58.878349 master-0 kubenswrapper[8988]: I1203 13:54:58.875978 8988 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 13:54:58.878698 master-0 kubenswrapper[8988]: I1203 13:54:58.878553 8988 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 13:54:58.878800 master-0 kubenswrapper[8988]: I1203 13:54:58.878695 8988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 03 13:54:58.879104 master-0 kubenswrapper[8988]: I1203 13:54:58.879072 8988 server.go:997] "Starting client certificate rotation" Dec 03 13:54:58.879104 master-0 kubenswrapper[8988]: I1203 13:54:58.879102 8988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 13:54:58.879566 master-0 kubenswrapper[8988]: I1203 13:54:58.879407 8988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 09:13:32.560291686 +0000 UTC Dec 03 13:54:58.879566 master-0 kubenswrapper[8988]: I1203 13:54:58.879549 8988 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h18m33.680746321s for next certificate rotation Dec 03 13:54:58.880309 master-0 kubenswrapper[8988]: I1203 13:54:58.880277 8988 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 13:54:58.882208 master-0 kubenswrapper[8988]: I1203 13:54:58.882160 8988 dynamic_cafile_content.go:161] "Starting 
controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 13:54:58.886101 master-0 kubenswrapper[8988]: I1203 13:54:58.886071 8988 log.go:25] "Validated CRI v1 runtime API" Dec 03 13:54:58.888926 master-0 kubenswrapper[8988]: I1203 13:54:58.888901 8988 log.go:25] "Validated CRI v1 image API" Dec 03 13:54:58.890720 master-0 kubenswrapper[8988]: I1203 13:54:58.890693 8988 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 13:54:58.895631 master-0 kubenswrapper[8988]: I1203 13:54:58.895571 8988 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3] Dec 03 13:54:58.896107 master-0 kubenswrapper[8988]: I1203 13:54:58.895792 8988 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm major:0 minor:332 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09/userdata/shm major:0 minor:54 fsType:tmpfs 
blockSize:0} /run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm major:0 minor:122 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm major:0 minor:327 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm major:0 minor:320 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm major:0 minor:160 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm major:0 minor:336 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm major:0 minor:338 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm major:0 minor:161 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm major:0 minor:149 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm major:0 minor:189 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm major:0 minor:345 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm major:0 minor:330 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:307 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv:{mountpoint:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert major:0 minor:294 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert major:0 minor:298 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27:{mountpoint:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27 major:0 minor:300 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z major:0 minor:316 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client major:0 minor:292 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert major:0 minor:299 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87:{mountpoint:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87 major:0 minor:301 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:296 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63aae3b9-9a72-497e-af01-5d8b8d0ac876/volumes/kubernetes.io~projected/kube-api-access-zbcrq:{mountpoint:/var/lib/kubelet/pods/63aae3b9-9a72-497e-af01-5d8b8d0ac876/volumes/kubernetes.io~projected/kube-api-access-zbcrq major:0 minor:305 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:156 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:344 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8:{mountpoint:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8 major:0 minor:302 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb:{mountpoint:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb major:0 minor:303 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert major:0 minor:295 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd:{mountpoint:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd major:0 minor:306 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq:{mountpoint:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq major:0 minor:310 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert major:0 minor:293 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access major:0 minor:304 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert major:0 minor:297 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:153 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr major:0 minor:317 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8:{mountpoint:/var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8 major:0 minor:309 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ce26e464-9a7c-4b22-a2b4-03706b351455/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ce26e464-9a7c-4b22-a2b4-03706b351455/volumes/kubernetes.io~projected/kube-api-access major:0 minor:93 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:157 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:111 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:121 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd:{mountpoint:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd major:0 minor:326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert major:0 minor:323 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659:{mountpoint:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659 major:0 minor:328 fsType:tmpfs blockSize:0} overlay_0-100:{mountpoint:/var/lib/containers/storage/overlay/a3db11f1d197403bc7a601e369088c65e6fd1a2dcec9e00dd658e8c370a7df70/merged major:0 minor:100 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/b28094ecde33e074ba42b216bc25eca0518ff8bb544951653a2f56ad9f53ff0f/merged major:0 minor:102 fsType:overlay 
blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/03d950f4ff72d8d2dd012b88e2ae954f879ae21a92e47e62b61ea9cd76d17dd4/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/ef5bd58eb0ad358ae1e1366823e52735a9c2da31c6523b4d529a64003363c6ea/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/3a22c4cdf61b84fec2cb9e5a85bc350ab2097e04f6793ecaca65d3228b287a35/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/f4a9d268cf636cb09fd9194375777d2be4ab675ec6be26ceeda50ae7e31db2d6/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/4c1a93970a27e6ff2b780eba56d77c18d0e24b8d3167450fe1cc9e9906ffe78c/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/1d9a07802e797c42a92202567e6776d286f2bf52a2a3cc7a6341e1f1c29eb632/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/c7736f2dd9e62e9b9b7deed044c391c33db2fe9f42042828ec3b149b8f0dbcd2/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/120da911464c1fb225cb608d76b4a82916c33cac96a69e3e9a86bc3157fb10e9/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/7e22b4931d2b3d6acab6b8cdf0ee4cc2af80fe5e0a51feb9bcfc4f8ec9a08526/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/9236aa71b7b38f4bff4c386541400712ff4a86c5ee2aad3a59abbeb721709243/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/8a8321cb932b4a522658232dedf3acbbc58acfb292a12e866005fbd79c81d391/merged major:0 minor:166 fsType:overlay blockSize:0} 
overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/6f8d0ff5f4e7c5e5a5dcf8f000a18273128e6517cdecfdfb8a2f6075acbe4fd2/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/c660ac3557d781d61a3c882878e54a8cf86a2746440d0988edf943c2ddfa9a65/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/8236d22cbdd2919407b45e0d0568eda372d4f537bfa3ef9cacd9cab7c47a50dd/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/f2eb61001221c480748a3495da35b68803f47bcce4cb08d0100367e6b89aebb0/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-191:{mountpoint:/var/lib/containers/storage/overlay/f70fdcf95efa88a65471e05e70e186a76e212181a7df83317a1f36ede95aa12c/merged major:0 minor:191 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/4bcdf6360724da0671b5a37769450c665e1215c51d7cc9e81fc8fb454dab693a/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/643ac61b6e9c7dee37c5b170372727dd86a2daa5bdb22c81f453640482a5ea1d/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/ef68c7290f7af07741cdcb315e6a4485e7b1f5f555c56ef741e8925950486f6c/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-211:{mountpoint:/var/lib/containers/storage/overlay/c3de0e27d9cef0462673f0b35d281abf75047258bbd17804b232b6ca877e1e91/merged major:0 minor:211 fsType:overlay blockSize:0} overlay_0-216:{mountpoint:/var/lib/containers/storage/overlay/7c9f94dc7fbbd3a44213f7ea29328ce15650e2d2b3389013c4df3ac6a4e009fb/merged major:0 minor:216 fsType:overlay blockSize:0} overlay_0-224:{mountpoint:/var/lib/containers/storage/overlay/c85acc172b910a6d6388b23270b22ea8747398e06f1eacf1f277c299684b21c7/merged major:0 minor:224 fsType:overlay blockSize:0} 
overlay_0-232:{mountpoint:/var/lib/containers/storage/overlay/d62aa754c6b5995786c31c420c63322e01df029e299f577c6813b361ce23f13a/merged major:0 minor:232 fsType:overlay blockSize:0} overlay_0-234:{mountpoint:/var/lib/containers/storage/overlay/c397723290e3f49a0389eb76d4d69586e59ffa0aae5e3e92046abbf031d609f1/merged major:0 minor:234 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/9ebaf5d8e34f860c47e435c286cd18c600d76ca1649e058acae48c861175d8ed/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-249:{mountpoint:/var/lib/containers/storage/overlay/a32d0547d1784c391f2ad87dbac7aa003077a60e0efadabbc352e402b75d6176/merged major:0 minor:249 fsType:overlay blockSize:0} overlay_0-257:{mountpoint:/var/lib/containers/storage/overlay/7f8375596b2ab5f559ff224c5046dc2687020aaf5c6f3c99995f0359e52c3dcf/merged major:0 minor:257 fsType:overlay blockSize:0} overlay_0-259:{mountpoint:/var/lib/containers/storage/overlay/b0b62acafc8b64d276c85ddd41e6da46ad3ea7be162f750e59750002ed3796b1/merged major:0 minor:259 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/55ef4d09ca72458597f372804dedc522cb9d0db75675cba15e22f5bcedfff899/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/6034f939618f6eba815ba5c8463be85faf0a54eb7ece5e1d019569944d2c9985/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/9ff415c0e21f491c3f6dc00d577cca2e55b098dd69c0bde3bc298a22ced5b5c2/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/82fb5f275cf8014b40d9050cd51b807daac3a9ffb1901c48a30c518eeb697709/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/b4eb71cc5d480b52f02ce8d4197ca15855c3b671d76b58780683f8b0e66eec7f/merged major:0 minor:347 fsType:overlay blockSize:0} 
overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/46c57282d826f47312deeeffd36ab3b27aac2891e427965b24918c24456e25e6/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/7e3d993573166c6dfeaeda34d2ec326f42debd978fd83b566228ce108e305175/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/6841c6bdfd7afaf50c2fc1c3ed441a283b2c85e9e3f02f53019e303f0b79dbb2/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/7883ab83040058df480bd758bd358a20363ef999571f5222f4e55070ead30f51/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/40bb7f8f3c7dde87fc169e9434774ff557ac766e814a0cbd17bf49aa4c3cb7d2/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/898c26409d33ce592b3bb2d8f8752f33926f0c411e85c4adb7a68bb03f8f913d/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-361:{mountpoint:/var/lib/containers/storage/overlay/7245b5d9224ee258d6812600ffc318255c4cc770d310e741279c59cf5a91944d/merged major:0 minor:361 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/2a850f303f79ef1e62f95c5433135f942b55006174eab5a2fa3379971fcbe6c8/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-365:{mountpoint:/var/lib/containers/storage/overlay/9aa63b7e4ee62240ee8449c093cab85360c155aeeaf18af63fcfd5c5802a6223/merged major:0 minor:365 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/e267232cfd0b4a028bfb248dbd044065a08e82e79956aa7fd4ed080927b49ef0/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/cd5abbc0efe2c9082f2ca9018b4b7f8233234882023bfa08ef7a15d144229541/merged major:0 minor:369 fsType:overlay blockSize:0} 
overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/54fbfb49adcf1d2bed25e32da3905b88f6786110aa42f9ef09e20c75d87902de/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/877515311b6a9ef9f525c12c010e0bbd8dbf9ca98e9c6c50e75f647c0aca1626/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/e416a29bc36976a3ee4bc38acc43d7c2b23579fa3130f2ec47a88c193c2d27f7/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/a72598a2554e6517aa5e970e9f3fa46391ff7e27e7162c6f65c0790af8034c31/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/a475d1516fc69d6181b704744e3eb6b63bc7d800f76c49e12703d607319dfb54/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/403d8f2aa7460c7ae1217817ec5d0f3432a8d6f9a3fcee3e3ed07e2095032f5d/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/416ff591624a01d34ec79f35d9d2b5c5975b90baae806dfb54958b0f78823457/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/ba732b4bd85147f0276fbbcfab158f0528bff3e57874d7143762df30bad7d7d6/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/5368ce72ead0ed7236c5672f1a3a398d022aa129825d1a54b54f3ebc822c6d35/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/00b750e785e9297b8dcc513da9b6ab22e3d7a28cb5958e1d67fa9aad700a7ac8/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/6d4f3de7b9b407a74f15f1e9852a8ef542300757d1822dc4c3f28bc6b457f759/merged major:0 minor:68 fsType:overlay blockSize:0} 
overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d089e7c37e00822be318c2bb74aaf2f77d7ecee10aef584e539a3b2526dd2041/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/5643b4ef1d7ffc5bfb435e80b693fbd5829bcc8471d9aa301345d7cebac37527/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/7c6738a8659019876dd5d911ebbac209609b565f7d80d25d8c393fd48d56d287/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/02f57c2dd052d032938e81d160756f004e2a7c1146db9fddda47ea77fa13c9a7/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/38c14967a0422ba064a8ac99cb5d5ba9063714a6487768173c7ce997870d33ce/merged major:0 minor:95 fsType:overlay blockSize:0}] Dec 03 13:54:58.924118 master-0 kubenswrapper[8988]: I1203 13:54:58.923550 8988 manager.go:217] Machine: {Timestamp:2025-12-03 13:54:58.922123604 +0000 UTC m=+0.110191907 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:5051321c-b7a7-4bc8-b64a-b5b2f6df7e9d Filesystems:[{Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:156 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd DeviceMajor:0 DeviceMinor:306 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:293 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8 DeviceMajor:0 DeviceMinor:309 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-224 DeviceMajor:0 DeviceMinor:224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:292 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq DeviceMajor:0 DeviceMinor:310 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-234 DeviceMajor:0 DeviceMinor:234 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:291 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:153 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:323 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:314 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd DeviceMajor:0 DeviceMinor:326 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659 DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm DeviceMajor:0 DeviceMinor:336 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z DeviceMajor:0 DeviceMinor:316 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:290 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-191 DeviceMajor:0 
DeviceMinor:191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-249 DeviceMajor:0 DeviceMinor:249 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-361 DeviceMajor:0 DeviceMinor:361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:294 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm DeviceMajor:0 DeviceMinor:327 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:94 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-216 DeviceMajor:0 
DeviceMinor:216 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8 DeviceMajor:0 DeviceMinor:302 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm DeviceMajor:0 DeviceMinor:149 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-259 DeviceMajor:0 DeviceMinor:259 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:158 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-365 DeviceMajor:0 DeviceMinor:365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-100 DeviceMajor:0 DeviceMinor:100 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-257 DeviceMajor:0 DeviceMinor:257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:307 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:126 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv DeviceMajor:0 DeviceMinor:312 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm DeviceMajor:0 DeviceMinor:332 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:121 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm DeviceMajor:0 DeviceMinor:122 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:127 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:311 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm DeviceMajor:0 DeviceMinor:160 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-232 DeviceMajor:0 DeviceMinor:232 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:318 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm DeviceMajor:0 DeviceMinor:320 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm DeviceMajor:0 DeviceMinor:161 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87 DeviceMajor:0 DeviceMinor:301 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr DeviceMajor:0 DeviceMinor:317 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:297 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:304 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/63aae3b9-9a72-497e-af01-5d8b8d0ac876/volumes/kubernetes.io~projected/kube-api-access-zbcrq DeviceMajor:0 DeviceMinor:305 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:296 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm DeviceMajor:0 DeviceMinor:189 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:298 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27 DeviceMajor:0 DeviceMinor:300 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:313 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm DeviceMajor:0 DeviceMinor:345 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ce26e464-9a7c-4b22-a2b4-03706b351455/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:93 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:295 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:344 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:157 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-211 DeviceMajor:0 DeviceMinor:211 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:308 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:315 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:299 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm DeviceMajor:0 DeviceMinor:330 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:159 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:12a33b618352d27 MacAddress:22:b2:f2:a3:0b:74 Speed:10000 Mtu:8900} {Name:4a83b648c669c68 MacAddress:2a:68:6b:12:bd:1d Speed:10000 Mtu:8900} {Name:4b9a6d5be513374 MacAddress:46:23:c0:1e:19:b6 Speed:10000 Mtu:8900} {Name:79a4ce4fa1bb86b MacAddress:ae:a0:8c:1a:fd:bd Speed:10000 Mtu:8900} {Name:8bff50a8699bca9 MacAddress:b2:b8:2f:2d:77:34 Speed:10000 Mtu:8900} {Name:938a08c4d1aea74 MacAddress:86:b3:e3:cf:40:73 Speed:10000 Mtu:8900} {Name:956e8e5ddc763af MacAddress:16:e6:6a:a8:63:29 Speed:10000 Mtu:8900} {Name:bf859b5a264e6e2 MacAddress:52:37:22:51:f8:a6 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:ca1230f4b492fd1 MacAddress:ba:73:d5:9c:6f:f2 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:f2641a7c5c46993 MacAddress:fe:1b:70:08:db:1e Speed:10000 Mtu:8900} {Name:fa6ec978459ecd0 MacAddress:86:67:b7:23:71:72 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:9e:f4:18:ab:cf:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 
Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 03 13:54:58.924118 master-0 kubenswrapper[8988]: I1203 13:54:58.924080 8988 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 03 13:54:58.924761 master-0 kubenswrapper[8988]: I1203 13:54:58.924306 8988 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 03 13:54:58.925364 master-0 kubenswrapper[8988]: I1203 13:54:58.925225 8988 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 03 13:54:58.925643 master-0 kubenswrapper[8988]: I1203 13:54:58.925575 8988 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 03 13:54:58.926007 master-0 kubenswrapper[8988]: I1203 13:54:58.925630 8988 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage
":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 13:54:58.926084 master-0 kubenswrapper[8988]: I1203 13:54:58.926070 8988 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 13:54:58.926084 master-0 kubenswrapper[8988]: I1203 13:54:58.926084 8988 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 13:54:58.926161 master-0 kubenswrapper[8988]: I1203 13:54:58.926104 8988 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 13:54:58.926212 master-0 kubenswrapper[8988]: I1203 13:54:58.926134 8988 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 13:54:58.926463 master-0 kubenswrapper[8988]: I1203 13:54:58.926424 8988 state_mem.go:36] "Initialized new in-memory state store" Dec 03 13:54:58.927242 master-0 kubenswrapper[8988]: I1203 13:54:58.927207 8988 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 13:54:58.927401 master-0 kubenswrapper[8988]: I1203 13:54:58.927350 8988 kubelet.go:418] "Attempting to sync node with API server" Dec 03 13:54:58.927401 master-0 kubenswrapper[8988]: I1203 13:54:58.927368 8988 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 13:54:58.927487 master-0 kubenswrapper[8988]: I1203 13:54:58.927423 8988 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 13:54:58.927487 master-0 kubenswrapper[8988]: I1203 13:54:58.927437 8988 kubelet.go:324] "Adding apiserver pod source" Dec 03 13:54:58.927487 master-0 
kubenswrapper[8988]: I1203 13:54:58.927465 8988 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 13:54:58.929411 master-0 kubenswrapper[8988]: I1203 13:54:58.929332 8988 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1"
Dec 03 13:54:58.929930 master-0 kubenswrapper[8988]: I1203 13:54:58.929885 8988 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 13:54:58.930834 master-0 kubenswrapper[8988]: I1203 13:54:58.930791 8988 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 13:54:58.931103 master-0 kubenswrapper[8988]: I1203 13:54:58.931056 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 13:54:58.931168 master-0 kubenswrapper[8988]: I1203 13:54:58.931118 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 13:54:58.931168 master-0 kubenswrapper[8988]: I1203 13:54:58.931138 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 13:54:58.931168 master-0 kubenswrapper[8988]: I1203 13:54:58.931162 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 13:54:58.931357 master-0 kubenswrapper[8988]: I1203 13:54:58.931181 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 13:54:58.931357 master-0 kubenswrapper[8988]: I1203 13:54:58.931206 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 13:54:58.931357 master-0 kubenswrapper[8988]: I1203 13:54:58.931224 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 13:54:58.931357 master-0 kubenswrapper[8988]: I1203 13:54:58.931246 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 13:54:58.931548 master-0
kubenswrapper[8988]: I1203 13:54:58.931397 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 13:54:58.931548 master-0 kubenswrapper[8988]: I1203 13:54:58.931419 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 13:54:58.931548 master-0 kubenswrapper[8988]: I1203 13:54:58.931436 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 13:54:58.931548 master-0 kubenswrapper[8988]: I1203 13:54:58.931454 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 13:54:58.931548 master-0 kubenswrapper[8988]: I1203 13:54:58.931528 8988 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 13:54:58.933234 master-0 kubenswrapper[8988]: I1203 13:54:58.933197 8988 server.go:1280] "Started kubelet"
Dec 03 13:54:58.933628 master-0 kubenswrapper[8988]: I1203 13:54:58.933568 8988 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 13:54:58.934384 master-0 kubenswrapper[8988]: I1203 13:54:58.934228 8988 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 13:54:58.934462 master-0 kubenswrapper[8988]: I1203 13:54:58.934437 8988 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 03 13:54:58.934698 master-0 systemd[1]: Started Kubernetes Kubelet.
Dec 03 13:54:58.935117 master-0 kubenswrapper[8988]: I1203 13:54:58.935083 8988 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 13:54:58.966049 master-0 kubenswrapper[8988]: I1203 13:54:58.935859 8988 server.go:449] "Adding debug handlers to kubelet server"
Dec 03 13:54:58.973864 master-0 kubenswrapper[8988]: I1203 13:54:58.973751 8988 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 13:54:58.974329 master-0 kubenswrapper[8988]: I1203 13:54:58.974194 8988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 13:54:58.974396 master-0 kubenswrapper[8988]: I1203 13:54:58.974314 8988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 11:11:41.308131385 +0000 UTC
Dec 03 13:54:58.974396 master-0 kubenswrapper[8988]: I1203 13:54:58.974354 8988 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h16m42.333780642s for next certificate rotation
Dec 03 13:54:58.974396 master-0 kubenswrapper[8988]: I1203 13:54:58.974358 8988 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 13:54:58.974485 master-0 kubenswrapper[8988]: I1203 13:54:58.974449 8988 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 13:54:58.974485 master-0 kubenswrapper[8988]: I1203 13:54:58.974459 8988 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 13:54:58.975540 master-0 kubenswrapper[8988]: I1203 13:54:58.975449 8988 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Dec 03 13:54:58.977730 master-0 kubenswrapper[8988]: I1203 13:54:58.977687 8988 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 03 13:54:58.977917 master-0 kubenswrapper[8988]: I1203 13:54:58.977844 8988 reflector.go:368]
Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982107 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext=""
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982171 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext=""
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982185 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext=""
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982197 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext=""
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982210 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext=""
Dec 03 13:54:58.982196 master-0 kubenswrapper[8988]: I1203 13:54:58.982221 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982233 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982247 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982279 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982307 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982320 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982334 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f"
volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982346 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982363 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982373 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982384 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982395 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982409 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825"
volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982420 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982432 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982443 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982454 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982465 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982478 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982488 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982506 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982520 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982543 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982563 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982619 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982640 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982654 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982667 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982681 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce26e464-9a7c-4b22-a2b4-03706b351455" volumeName="kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982692 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982704 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982717 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982731 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982745 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.982706 master-0 kubenswrapper[8988]: I1203 13:54:58.982760 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982773 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982788 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982829 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982842 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982854 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982867 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982877 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982892 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982905 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982917 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982928 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982940 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982960 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982973 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982985 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.982997 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983010 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983023 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983222 8988 factory.go:55] Registering systemd factory
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983323 8988 factory.go:221] Registration of the systemd container factory successfully
Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983215 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0"
volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983401 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983419 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983451 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983465 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983479 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63aae3b9-9a72-497e-af01-5d8b8d0ac876" volumeName="kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983493 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" 
volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983507 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983524 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983538 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983552 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983568 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983585 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" 
volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983612 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983625 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983647 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983665 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983692 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce26e464-9a7c-4b22-a2b4-03706b351455" volumeName="kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983712 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983736 8988 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983751 8988 reconstruct.go:97] "Volume reconstruction finished" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983764 8988 reconciler.go:26] "Reconciler: start to sync state" Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.983697 8988 factory.go:153] Registering CRI-O factory Dec 03 13:54:58.984161 master-0 kubenswrapper[8988]: I1203 13:54:58.984214 8988 factory.go:221] Registration of the crio container factory successfully Dec 03 13:54:58.985635 master-0 kubenswrapper[8988]: I1203 13:54:58.984422 8988 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 13:54:58.985635 master-0 kubenswrapper[8988]: I1203 13:54:58.984465 8988 factory.go:103] Registering Raw factory Dec 03 13:54:58.985635 master-0 kubenswrapper[8988]: I1203 13:54:58.984483 8988 manager.go:1196] Started watching for new ooms in manager Dec 03 13:54:58.985635 master-0 kubenswrapper[8988]: I1203 13:54:58.985335 8988 manager.go:319] Starting recovery of all containers Dec 03 13:54:59.017528 master-0 kubenswrapper[8988]: I1203 13:54:59.016877 8988 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 03 13:54:59.020282 master-0 kubenswrapper[8988]: I1203 13:54:59.020196 8988 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 13:54:59.029581 master-0 kubenswrapper[8988]: I1203 13:54:59.020307 8988 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 13:54:59.029581 master-0 kubenswrapper[8988]: I1203 13:54:59.020352 8988 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 13:54:59.029581 master-0 kubenswrapper[8988]: E1203 13:54:59.020430 8988 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 13:54:59.029581 master-0 kubenswrapper[8988]: I1203 13:54:59.024227 8988 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 13:54:59.029581 master-0 kubenswrapper[8988]: I1203 13:54:59.025185 8988 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060147 8988 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="a71ca428dfafcfdc36c094ec10a4f26a0955b62eee12c5643b197e7b67fda68a" exitCode=0 Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060217 8988 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="f9b2a45b3882aa4aab7d621861c3b576125dca392eda394a42bdbf272c5861e2" exitCode=0 Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060225 8988 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="26079c56109d1215373542cb7279aa79197f8d276b87f23f84c5d431dd38bc3f" exitCode=0 Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060232 8988 generic.go:334] "Generic (PLEG): container finished" 
podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="54eb7436f6ac8799b7f10cde49a492e33995d42df0890008db66fbf955cc9e20" exitCode=0 Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060238 8988 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="a6cef233e6c629ac6fba57da009a22816a29742255beeb15a48e7b7b48c9e536" exitCode=0 Dec 03 13:54:59.062631 master-0 kubenswrapper[8988]: I1203 13:54:59.060245 8988 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="8508b9103a62149e40a9f8b253309fee2580cb816ac86bfe2d7376f7c71e976c" exitCode=0 Dec 03 13:54:59.063118 master-0 kubenswrapper[8988]: I1203 13:54:59.062675 8988 generic.go:334] "Generic (PLEG): container finished" podID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerID="c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541" exitCode=0 Dec 03 13:54:59.066914 master-0 kubenswrapper[8988]: I1203 13:54:59.066829 8988 generic.go:334] "Generic (PLEG): container finished" podID="9afa5e14-6832-4650-9401-97359c445e61" containerID="47a8ddfc7f7b71da4bd36254308448e4c5ee29fcc63f3b852aed944db5125062" exitCode=0 Dec 03 13:54:59.074640 master-0 kubenswrapper[8988]: I1203 13:54:59.074571 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log" Dec 03 13:54:59.075628 master-0 kubenswrapper[8988]: I1203 13:54:59.075541 8988 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" exitCode=1 Dec 03 13:54:59.075628 master-0 kubenswrapper[8988]: I1203 13:54:59.075608 8988 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c" exitCode=0 
Dec 03 13:54:59.084444 master-0 kubenswrapper[8988]: I1203 13:54:59.084337 8988 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="c92c50a11c2a662e5059d5ecc58bf830b95d8aca43091af67255e096313ccb46" exitCode=0 Dec 03 13:54:59.091408 master-0 kubenswrapper[8988]: I1203 13:54:59.091300 8988 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b" exitCode=1 Dec 03 13:54:59.093924 master-0 kubenswrapper[8988]: I1203 13:54:59.093870 8988 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2" exitCode=1 Dec 03 13:54:59.101514 master-0 kubenswrapper[8988]: I1203 13:54:59.101438 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kk4tm_c777c9de-1ace-46be-b5c2-c71d252f53f4/kube-multus/0.log" Dec 03 13:54:59.101804 master-0 kubenswrapper[8988]: I1203 13:54:59.101538 8988 generic.go:334] "Generic (PLEG): container finished" podID="c777c9de-1ace-46be-b5c2-c71d252f53f4" containerID="eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e" exitCode=1 Dec 03 13:54:59.105984 master-0 kubenswrapper[8988]: I1203 13:54:59.105911 8988 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3" exitCode=0 Dec 03 13:54:59.120862 master-0 kubenswrapper[8988]: E1203 13:54:59.120774 8988 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 13:54:59.144822 master-0 kubenswrapper[8988]: I1203 13:54:59.144741 8988 manager.go:324] Recovery completed Dec 03 13:54:59.188674 master-0 kubenswrapper[8988]: I1203 13:54:59.188346 8988 cpu_manager.go:225] "Starting CPU manager" policy="none" 
Dec 03 13:54:59.188674 master-0 kubenswrapper[8988]: I1203 13:54:59.188631 8988 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 13:54:59.188674 master-0 kubenswrapper[8988]: I1203 13:54:59.188656 8988 state_mem.go:36] "Initialized new in-memory state store" Dec 03 13:54:59.189085 master-0 kubenswrapper[8988]: I1203 13:54:59.188894 8988 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 03 13:54:59.189085 master-0 kubenswrapper[8988]: I1203 13:54:59.188909 8988 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 03 13:54:59.189085 master-0 kubenswrapper[8988]: I1203 13:54:59.188942 8988 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Dec 03 13:54:59.189085 master-0 kubenswrapper[8988]: I1203 13:54:59.188951 8988 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Dec 03 13:54:59.189085 master-0 kubenswrapper[8988]: I1203 13:54:59.188959 8988 policy_none.go:49] "None policy: Start" Dec 03 13:54:59.191922 master-0 kubenswrapper[8988]: I1203 13:54:59.191846 8988 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 13:54:59.192317 master-0 kubenswrapper[8988]: I1203 13:54:59.191975 8988 state_mem.go:35] "Initializing new in-memory state store" Dec 03 13:54:59.192393 master-0 kubenswrapper[8988]: I1203 13:54:59.192363 8988 state_mem.go:75] "Updated machine memory state" Dec 03 13:54:59.192393 master-0 kubenswrapper[8988]: I1203 13:54:59.192386 8988 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Dec 03 13:54:59.204337 master-0 kubenswrapper[8988]: I1203 13:54:59.204249 8988 manager.go:334] "Starting Device Plugin manager" Dec 03 13:54:59.204611 master-0 kubenswrapper[8988]: I1203 13:54:59.204358 8988 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 13:54:59.204611 master-0 kubenswrapper[8988]: I1203 13:54:59.204391 8988 server.go:79] "Starting device plugin 
registration server" Dec 03 13:54:59.205012 master-0 kubenswrapper[8988]: I1203 13:54:59.204977 8988 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 13:54:59.205105 master-0 kubenswrapper[8988]: I1203 13:54:59.205020 8988 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 13:54:59.205414 master-0 kubenswrapper[8988]: I1203 13:54:59.205302 8988 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 13:54:59.205481 master-0 kubenswrapper[8988]: I1203 13:54:59.205422 8988 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 13:54:59.205481 master-0 kubenswrapper[8988]: I1203 13:54:59.205434 8988 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 13:54:59.559556 master-0 kubenswrapper[8988]: I1203 13:54:59.559227 8988 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0"] Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559730 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"d411a9d4993d118dc0e255c06261c1eb2d14f7c6ba1e4128eeb20ef007aba795"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559805 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"886fbb171cc796081daa33c863e0ffd8e881f69d0055d5d49edec8b6ff9d962d"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559815 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"41b95a38663dd6fe34e183818a475977","Type":"ContainerStarted","Data":"1bb3508306d15f8960c87b184759a4c3c18967fbf7141d9ba4c80335f51e9e09"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559849 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd671840d59b133f88fb03765cdb68615a01b375fa5cbcc45c53662d0aad8d5" Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559877 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab" Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559888 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559898 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559908 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559917 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559944 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559953 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559974 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559984 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.559992 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560002 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560012 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560027 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560036 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"d559032002ae450f2dcc5a6551686ae528fbdc12019934f45dbbd1835ac0a064"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560044 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerDied","Data":"23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3"} Dec 03 13:54:59.560299 master-0 kubenswrapper[8988]: I1203 13:54:59.560053 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"13238af3704fe583f617f61e755cf4c2","Type":"ContainerStarted","Data":"27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95"} Dec 03 13:54:59.561933 master-0 kubenswrapper[8988]: I1203 13:54:59.561888 8988 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:54:59.565796 master-0 kubenswrapper[8988]: I1203 13:54:59.564585 8988 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:54:59.565796 master-0 kubenswrapper[8988]: I1203 13:54:59.564634 8988 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:54:59.565796 master-0 kubenswrapper[8988]: I1203 13:54:59.564644 8988 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:54:59.565796 master-0 kubenswrapper[8988]: I1203 13:54:59.564727 8988 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:54:59.659449 master-0 kubenswrapper[8988]: I1203 13:54:59.659332 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:54:59.659449 master-0 kubenswrapper[8988]: I1203 13:54:59.659426 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:54:59.659449 master-0 kubenswrapper[8988]: I1203 13:54:59.659456 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " 
pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659479 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659506 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659566 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659593 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659618 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659642 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659697 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659729 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659754 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Dec 03 13:54:59.659876 master-0 
kubenswrapper[8988]: I1203 13:54:59.659774 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659795 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659818 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659838 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:54:59.659876 master-0 kubenswrapper[8988]: I1203 13:54:59.659873 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:54:59.761142 master-0 kubenswrapper[8988]: I1203 13:54:59.761013 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761142 master-0 kubenswrapper[8988]: I1203 13:54:59.761128 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761175 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761222 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761305 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761345 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761376 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761409 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761447 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761479 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761512 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761566 master-0 kubenswrapper[8988]: I1203 13:54:59.761543 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761576 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761609 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761640 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761672 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761701 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761873 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.761975 master-0 kubenswrapper[8988]: I1203 13:54:59.761967 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762021 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762069 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762120 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762165 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762242 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762321 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.762414 master-0 kubenswrapper[8988]: I1203 13:54:59.762387 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762464 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762569 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762609 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762597 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762577 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/13238af3704fe583f617f61e755cf4c2-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"13238af3704fe583f617f61e755cf4c2\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:54:59.762682 master-0 kubenswrapper[8988]: I1203 13:54:59.762635 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"etcd-master-0-master-0\" (UID: \"41b95a38663dd6fe34e183818a475977\") " pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:54:59.762898 master-0 kubenswrapper[8988]: I1203 13:54:59.762680 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.762898 master-0 kubenswrapper[8988]: I1203 13:54:59.762710 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:54:59.928315 master-0 kubenswrapper[8988]: I1203 13:54:59.928080 8988 apiserver.go:52] "Watching apiserver"
Dec 03 13:54:59.935666 master-0 kubenswrapper[8988]: I1203 13:54:59.935622 8988 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 13:54:59.937904 master-0 kubenswrapper[8988]: I1203 13:54:59.937763 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-multus/multus-additional-cni-plugins-42hmk","openshift-multus/multus-kk4tm","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-network-node-identity/network-node-identity-c8csx","openshift-ovn-kubernetes/ovnkube-node-txl6b","kube-system/bootstrap-kube-scheduler-master-0","openshift-etcd/etcd-master-0-master-0","openshift-network-operator/iptables-alerter-n24qb","openshift-network-diagnostics/network-check-target-pcchm","openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/multus-admission-controller-78ddcf56f9-8l84w","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","openshift-multus/network-metrics-daemon-ch7xd","assisted-installer/assisted-installer-controller-stq5g","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5","kube-system/bootstrap-kube-controller-manager-master-0","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-cluster-version/cluster-version-operator-869c786959-vrvwt","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"]
Dec 03 13:54:59.938371 master-0 kubenswrapper[8988]: I1203 13:54:59.938337 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:54:59.939338 master-0 kubenswrapper[8988]: I1203 13:54:59.939064 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:54:59.939702 master-0 kubenswrapper[8988]: I1203 13:54:59.939472 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g"
Dec 03 13:54:59.942741 master-0 kubenswrapper[8988]: I1203 13:54:59.942666 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:54:59.942950 master-0 kubenswrapper[8988]: I1203 13:54:59.942913 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 13:54:59.943403 master-0 kubenswrapper[8988]: I1203 13:54:59.943088 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Dec 03 13:54:59.943403 master-0 kubenswrapper[8988]: I1203 13:54:59.943299 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:54:59.943403 master-0 kubenswrapper[8988]: I1203 13:54:59.943378 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.943703 master-0 kubenswrapper[8988]: I1203 13:54:59.943659 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.944754 master-0 kubenswrapper[8988]: I1203 13:54:59.944714 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Dec 03 13:54:59.944851 master-0 kubenswrapper[8988]: I1203 13:54:59.944788 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 13:54:59.948167 master-0 kubenswrapper[8988]: I1203 13:54:59.945145 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:54:59.948167 master-0 kubenswrapper[8988]: I1203 13:54:59.946556 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:54:59.948167 master-0 kubenswrapper[8988]: I1203 13:54:59.946676 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 13:54:59.948167 master-0 kubenswrapper[8988]: I1203 13:54:59.945173 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:54:59.948167 master-0 kubenswrapper[8988]: I1203 13:54:59.947697 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:54:59.949894 master-0 kubenswrapper[8988]: I1203 13:54:59.948626 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:54:59.949894 master-0 kubenswrapper[8988]: I1203 13:54:59.948796 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.953321 master-0 kubenswrapper[8988]: I1203 13:54:59.952106 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 13:54:59.955064 master-0 kubenswrapper[8988]: I1203 13:54:59.954996 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:54:59.956434 master-0 kubenswrapper[8988]: I1203 13:54:59.956171 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:54:59.957002 master-0 kubenswrapper[8988]: I1203 13:54:59.956967 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.957535 master-0 kubenswrapper[8988]: I1203 13:54:59.957215 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.957836 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.957893 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.957951 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.958036 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.958122 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 13:54:59.958284 master-0 kubenswrapper[8988]: I1203 13:54:59.958212 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 03 13:54:59.958594 master-0 kubenswrapper[8988]: I1203 13:54:59.958327 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.958594 master-0 kubenswrapper[8988]: I1203 13:54:59.958393 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 13:54:59.958594 master-0 kubenswrapper[8988]: I1203 13:54:59.958419 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.958594 master-0 kubenswrapper[8988]: I1203 13:54:59.958578 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 03 13:54:59.959616 master-0 kubenswrapper[8988]: I1203 13:54:59.958704 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 03 13:54:59.959616 master-0 kubenswrapper[8988]: I1203 13:54:59.958768 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.959616 master-0 kubenswrapper[8988]: I1203 13:54:59.958831 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 03 13:54:59.959616 master-0 kubenswrapper[8988]: I1203 13:54:59.958921 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.959616 master-0 kubenswrapper[8988]: I1203 13:54:59.958325 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.959725 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.959764 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.959891 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960015 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960034 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960072 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960092 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960184 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960215 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960290 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960424 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960505 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.958706 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960741 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.960923 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961062 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961214 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961224 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961294 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961488 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961509 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961527 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961601 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961740 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961767 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961773 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961863 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961877 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.961970 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.962077 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 13:54:59.962143 master-0 kubenswrapper[8988]: I1203 13:54:59.962141 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962467 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962526 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962615 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962470 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962831 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.962953 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963105 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963187 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963199 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963250 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963619 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963954 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963967 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963979 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.963989 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964071 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964105 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964202 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964228 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964627 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964769 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.964902 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.965012 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.965168 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.965248 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.965314 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 13:54:59.965348 master-0 kubenswrapper[8988]: I1203 13:54:59.965379 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.965462 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.965538 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.965555 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.965606 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.965742 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.966354 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.966461 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 13:54:59.966511 master-0 kubenswrapper[8988]: I1203 13:54:59.966464 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 13:54:59.967688 master-0 kubenswrapper[8988]: I1203 13:54:59.967657 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 13:54:59.968757 master-0 kubenswrapper[8988]: I1203 13:54:59.968731 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 13:54:59.975702 master-0 kubenswrapper[8988]: I1203 13:54:59.973952 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Dec 03 13:54:59.987338 master-0 kubenswrapper[8988]: I1203 13:54:59.987249 8988 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Dec 03 13:54:59.993432 master-0 kubenswrapper[8988]: I1203 13:54:59.990790 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 03 13:54:59.993590 master-0 kubenswrapper[8988]: I1203 13:54:59.993531 8988 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 13:54:59.995402 master-0 kubenswrapper[8988]: I1203 13:54:59.993682 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 13:54:59.996496 master-0 kubenswrapper[8988]: I1203 13:54:59.996453 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 13:55:00.063819 master-0 kubenswrapper[8988]: I1203 13:55:00.063627 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:55:00.063819 master-0 kubenswrapper[8988]: I1203 13:55:00.063807 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:55:00.063819 master-0 kubenswrapper[8988]: I1203 13:55:00.063846 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.063866 8988 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.063916 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.063952 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.063979 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064000 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " 
pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064022 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064044 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064065 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064084 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064102 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064126 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064161 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064191 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.064229 master-0 kubenswrapper[8988]: I1203 13:55:00.064224 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.064678 master-0 kubenswrapper[8988]: I1203 13:55:00.064252 8988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.064678 master-0 kubenswrapper[8988]: I1203 13:55:00.064309 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:55:00.064678 master-0 kubenswrapper[8988]: I1203 13:55:00.064347 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:55:00.064678 master-0 kubenswrapper[8988]: I1203 13:55:00.064534 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.064804 master-0 kubenswrapper[8988]: I1203 13:55:00.064586 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.064804 master-0 
kubenswrapper[8988]: I1203 13:55:00.064762 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.064925 master-0 kubenswrapper[8988]: I1203 13:55:00.064827 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.064925 master-0 kubenswrapper[8988]: I1203 13:55:00.064881 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.064925 master-0 kubenswrapper[8988]: I1203 13:55:00.064869 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.065021 master-0 kubenswrapper[8988]: I1203 13:55:00.064919 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:55:00.065021 master-0 kubenswrapper[8988]: I1203 13:55:00.064966 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.065021 master-0 kubenswrapper[8988]: I1203 13:55:00.064983 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:55:00.065110 master-0 kubenswrapper[8988]: I1203 13:55:00.065012 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.065251 master-0 kubenswrapper[8988]: I1203 13:55:00.065124 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " 
pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:00.065251 master-0 kubenswrapper[8988]: I1203 13:55:00.065237 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.065341 master-0 kubenswrapper[8988]: I1203 13:55:00.065242 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.065341 master-0 kubenswrapper[8988]: I1203 13:55:00.065243 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:55:00.065341 master-0 kubenswrapper[8988]: I1203 13:55:00.065301 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.065434 master-0 kubenswrapper[8988]: I1203 13:55:00.065341 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.065434 master-0 kubenswrapper[8988]: I1203 13:55:00.065385 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:00.065434 master-0 kubenswrapper[8988]: I1203 13:55:00.065426 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.065522 master-0 kubenswrapper[8988]: I1203 13:55:00.065467 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:55:00.065522 master-0 kubenswrapper[8988]: I1203 13:55:00.065485 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.065522 master-0 kubenswrapper[8988]: I1203 13:55:00.065510 
8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:55:00.065611 master-0 kubenswrapper[8988]: I1203 13:55:00.065552 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.065611 master-0 kubenswrapper[8988]: I1203 13:55:00.065593 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.065685 master-0 kubenswrapper[8988]: I1203 13:55:00.065630 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:55:00.065685 master-0 kubenswrapper[8988]: I1203 13:55:00.065667 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod 
\"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:55:00.065742 master-0 kubenswrapper[8988]: I1203 13:55:00.065710 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:55:00.065784 master-0 kubenswrapper[8988]: I1203 13:55:00.065753 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:55:00.065821 master-0 kubenswrapper[8988]: I1203 13:55:00.065788 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:55:00.065863 master-0 kubenswrapper[8988]: I1203 13:55:00.065793 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:55:00.065951 master-0 kubenswrapper[8988]: I1203 13:55:00.065887 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.065951 master-0 kubenswrapper[8988]: I1203 13:55:00.065937 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066021 master-0 kubenswrapper[8988]: I1203 13:55:00.065948 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.066021 master-0 kubenswrapper[8988]: I1203 13:55:00.066005 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:00.066082 master-0 kubenswrapper[8988]: I1203 13:55:00.066047 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: 
\"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066125 master-0 kubenswrapper[8988]: I1203 13:55:00.066082 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.066125 master-0 kubenswrapper[8988]: I1203 13:55:00.066106 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.066185 master-0 kubenswrapper[8988]: I1203 13:55:00.066131 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.066185 master-0 kubenswrapper[8988]: I1203 13:55:00.066144 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:55:00.066185 master-0 kubenswrapper[8988]: I1203 13:55:00.066163 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.066279 master-0 kubenswrapper[8988]: I1203 13:55:00.066189 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.066279 master-0 kubenswrapper[8988]: I1203 13:55:00.066214 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.066279 master-0 kubenswrapper[8988]: I1203 13:55:00.066237 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.066391 master-0 kubenswrapper[8988]: I1203 13:55:00.066282 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.066391 master-0 kubenswrapper[8988]: I1203 13:55:00.066306 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.066391 master-0 kubenswrapper[8988]: I1203 13:55:00.066295 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:55:00.066391 master-0 kubenswrapper[8988]: I1203 13:55:00.066329 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066504 master-0 kubenswrapper[8988]: I1203 13:55:00.066440 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " 
pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.066504 master-0 kubenswrapper[8988]: I1203 13:55:00.066443 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.066559 master-0 kubenswrapper[8988]: I1203 13:55:00.066539 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.066589 master-0 kubenswrapper[8988]: I1203 13:55:00.066571 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.066636 master-0 kubenswrapper[8988]: I1203 13:55:00.066601 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:55:00.066636 master-0 kubenswrapper[8988]: I1203 13:55:00.066623 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.066706 master-0 kubenswrapper[8988]: I1203 13:55:00.066653 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:00.066706 master-0 kubenswrapper[8988]: I1203 13:55:00.066689 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.066766 master-0 kubenswrapper[8988]: I1203 13:55:00.066710 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.066766 master-0 kubenswrapper[8988]: I1203 13:55:00.066742 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066827 master-0 kubenswrapper[8988]: I1203 13:55:00.066785 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066858 master-0 kubenswrapper[8988]: I1203 13:55:00.066837 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.066916 master-0 kubenswrapper[8988]: I1203 13:55:00.066886 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.066957 master-0 kubenswrapper[8988]: I1203 13:55:00.066931 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.067108 master-0 kubenswrapper[8988]: I1203 13:55:00.067070 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: 
\"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:55:00.067150 master-0 kubenswrapper[8988]: I1203 13:55:00.067108 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.067150 master-0 kubenswrapper[8988]: I1203 13:55:00.067135 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:55:00.067221 master-0 kubenswrapper[8988]: I1203 13:55:00.067160 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.067221 master-0 kubenswrapper[8988]: I1203 13:55:00.067191 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " 
pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.067311 master-0 kubenswrapper[8988]: I1203 13:55:00.067219 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:55:00.067311 master-0 kubenswrapper[8988]: I1203 13:55:00.067276 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.067311 master-0 kubenswrapper[8988]: I1203 13:55:00.067303 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.067404 master-0 kubenswrapper[8988]: I1203 13:55:00.067326 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.067404 master-0 kubenswrapper[8988]: I1203 13:55:00.067358 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: 
\"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:55:00.067404 master-0 kubenswrapper[8988]: I1203 13:55:00.067394 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.067520 master-0 kubenswrapper[8988]: I1203 13:55:00.067420 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.067520 master-0 kubenswrapper[8988]: I1203 13:55:00.067432 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.067706 master-0 kubenswrapper[8988]: I1203 13:55:00.067453 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.067706 master-0 kubenswrapper[8988]: I1203 
13:55:00.067600 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.067706 master-0 kubenswrapper[8988]: I1203 13:55:00.067627 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.067706 master-0 kubenswrapper[8988]: I1203 13:55:00.067677 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:55:00.067839 master-0 kubenswrapper[8988]: I1203 13:55:00.067744 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.067839 master-0 kubenswrapper[8988]: I1203 13:55:00.067805 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: 
\"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:55:00.067939 master-0 kubenswrapper[8988]: I1203 13:55:00.067847 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.067939 master-0 kubenswrapper[8988]: I1203 13:55:00.067862 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.067933 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068045 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068088 8988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068107 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068136 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068173 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068191 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068214 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.067600 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068247 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068043 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068008 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068287 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068352 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068380 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068285 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 
13:55:00.068412 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068413 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068386 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068464 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068481 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068521 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068543 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.068522 master-0 kubenswrapper[8988]: I1203 13:55:00.068558 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068753 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068792 8988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068804 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068815 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068638 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068900 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.068971 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069014 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069215 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069236 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069313 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069250 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069388 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069432 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069464 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.069672 master-0 
kubenswrapper[8988]: I1203 13:55:00.069435 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069523 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069553 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069597 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069625 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: 
\"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 13:55:00.069672 master-0 kubenswrapper[8988]: I1203 13:55:00.069662 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069701 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069742 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069775 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069813 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069846 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069883 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069914 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069948 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.069981 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070048 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070083 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070115 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: 
I1203 13:55:00.070141 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070172 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070194 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070219 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070242 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod 
\"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070291 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070195 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070504 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:55:00.070531 master-0 kubenswrapper[8988]: I1203 13:55:00.070514 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070551 
8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070658 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070682 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070663 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070689 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.071176 master-0 
kubenswrapper[8988]: I1203 13:55:00.070747 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070833 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:00.071176 master-0 kubenswrapper[8988]: I1203 13:55:00.070886 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:55:00.171180 master-0 kubenswrapper[8988]: I1203 13:55:00.171103 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.171180 master-0 kubenswrapper[8988]: I1203 13:55:00.171176 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") 
pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171211 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171289 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171322 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171347 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171374 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: E1203 13:55:00.171387 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:00.171514 master-0 kubenswrapper[8988]: I1203 13:55:00.171482 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171524 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171548 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.671516133 +0000 UTC m=+1.859584466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: I1203 13:55:00.171561 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171569 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171652 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: I1203 13:55:00.171670 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: I1203 13:55:00.171522 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " 
pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: I1203 13:55:00.171689 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171615 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.671587705 +0000 UTC m=+1.859656028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171789 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.671761451 +0000 UTC m=+1.859829804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found Dec 03 13:55:00.171816 master-0 kubenswrapper[8988]: E1203 13:55:00.171810 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.671799012 +0000 UTC m=+1.859867375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.171856 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.171891 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: E1203 13:55:00.171917 8988 secret.go:189] Couldn't get secret 
openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.171969 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: E1203 13:55:00.172003 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.671985907 +0000 UTC m=+1.860054200 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172049 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172080 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172086 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172121 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172128 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172129 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172186 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 
13:55:00.172229 master-0 kubenswrapper[8988]: I1203 13:55:00.172213 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172229 master-0 kubenswrapper[8988]: E1203 13:55:00.172181 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172247 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: E1203 13:55:00.172297 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.672251855 +0000 UTC m=+1.860320148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172345 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172372 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172332 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172451 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172464 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172501 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172533 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172565 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172608 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172632 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172661 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172676 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.172681 master-0 kubenswrapper[8988]: I1203 13:55:00.172686 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172720 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172731 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172759 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: E1203 13:55:00.172725 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172790 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: E1203 13:55:00.172818 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.672806661 +0000 UTC m=+1.860875034 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172839 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172856 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172862 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172897 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172948 8988 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.172976 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173009 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173020 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173036 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173061 8988 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173070 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: I1203 13:55:00.173085 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173100 master-0 kubenswrapper[8988]: E1203 13:55:00.173106 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173135 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: E1203 13:55:00.173152 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert 
podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.673138961 +0000 UTC m=+1.861207334 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173135 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173137 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173105 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173108 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173243 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173304 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173338 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173380 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173411 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173429 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173440 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173477 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173483 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173517 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173548 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173576 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173585 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173613 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173577 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173634 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173650 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: E1203 13:55:00.173551 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173674 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173677 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: I1203 13:55:00.173701 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.173685 master-0 kubenswrapper[8988]: E1203 13:55:00.173711 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.673697987 +0000 UTC m=+1.861766350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173752 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: E1203 13:55:00.173781 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: E1203 13:55:00.173786 8988 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: E1203 13:55:00.173814 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:00.67380456 +0000 UTC m=+1.861872863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173781 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173829 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: E1203 13:55:00.173842 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 13:55:00.673824801 +0000 UTC m=+1.861893124 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173864 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173883 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173917 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.174459 master-0 kubenswrapper[8988]: I1203 13:55:00.173948 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.174459 master-0 
kubenswrapper[8988]: I1203 13:55:00.173917 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:00.681560 master-0 kubenswrapper[8988]: I1203 13:55:00.681358 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:00.681560 master-0 kubenswrapper[8988]: I1203 13:55:00.681547 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.681652 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681546 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 
13:55:00.681690 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681760 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.681735976 +0000 UTC m=+2.869804269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681803 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681601 8988 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681840 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.681829249 +0000 UTC m=+2.869897532 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.681990 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.681977033 +0000 UTC m=+2.870045336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.681950 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682035 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682126 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682163 8988 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682154158 +0000 UTC m=+2.870222441 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682232 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682195309 +0000 UTC m=+2.870263632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682072 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682318 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682346 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682365 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682460 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682494 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682485008 +0000 UTC m=+2.870553371 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682510 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682502208 +0000 UTC m=+2.870570611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682407 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682549 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: I1203 13:55:00.682637 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682568 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682683 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682632 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682651 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682714 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682702734 +0000 UTC m=+2.870771017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682825 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682812447 +0000 UTC m=+2.870880810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682851 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682844478 +0000 UTC m=+2.870912751 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found
Dec 03 13:55:00.688554 master-0 kubenswrapper[8988]: E1203 13:55:00.682870 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:01.682865679 +0000 UTC m=+2.870933962 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found
Dec 03 13:55:00.897331 master-0 kubenswrapper[8988]: I1203 13:55:00.897218 8988 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Dec 03 13:55:00.897590 master-0 kubenswrapper[8988]: I1203 13:55:00.897445 8988 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Dec 03 13:55:00.911086 master-0 kubenswrapper[8988]: I1203 13:55:00.911006 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:55:00.913001 master-0 kubenswrapper[8988]: I1203 13:55:00.912961 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 13:55:00.916639 master-0 kubenswrapper[8988]: I1203 13:55:00.916571 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:55:00.918018 master-0 kubenswrapper[8988]: I1203 13:55:00.917898 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:00.921590 master-0 kubenswrapper[8988]: I1203 13:55:00.918471 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 13:55:00.921590 master-0 kubenswrapper[8988]: I1203 13:55:00.918942 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:55:00.933313 master-0 kubenswrapper[8988]: I1203 13:55:00.933154 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:55:00.934163 master-0 kubenswrapper[8988]: I1203 13:55:00.934051 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:55:00.935348 master-0 kubenswrapper[8988]: I1203 13:55:00.934820 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:00.941324 master-0 kubenswrapper[8988]: I1203 13:55:00.937749 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:55:00.941324 master-0 kubenswrapper[8988]: I1203 13:55:00.938427 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:55:00.941324 master-0 kubenswrapper[8988]: I1203 13:55:00.939853 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:00.941324 master-0 kubenswrapper[8988]: W1203 13:55:00.939954 8988 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Dec 03 13:55:00.946308 master-0 kubenswrapper[8988]: I1203 13:55:00.942306 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:55:00.946308 master-0 kubenswrapper[8988]: E1203 13:55:00.942432 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:55:00.959414 master-0 kubenswrapper[8988]: I1203 13:55:00.958834 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:55:00.961310 master-0 kubenswrapper[8988]: I1203 13:55:00.960077 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:00.961310 master-0 kubenswrapper[8988]: I1203 13:55:00.960481 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:55:00.961310 master-0 kubenswrapper[8988]: I1203 13:55:00.961020 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:55:00.961310 master-0 kubenswrapper[8988]: E1203 13:55:00.961096 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:55:00.961310 master-0 kubenswrapper[8988]: I1203 13:55:00.961176 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:55:00.961837 master-0 kubenswrapper[8988]: E1203 13:55:00.961532 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:00.962582 master-0 kubenswrapper[8988]: E1203 13:55:00.962529 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:55:00.963181 master-0 kubenswrapper[8988]: I1203 13:55:00.963153 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:55:00.963277 master-0 kubenswrapper[8988]: E1203 13:55:00.963202 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:55:00.963927 master-0 kubenswrapper[8988]: I1203 13:55:00.963900 8988 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 03 13:55:00.964045 master-0 kubenswrapper[8988]: I1203 13:55:00.964016 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:55:00.964216 master-0 kubenswrapper[8988]: I1203 13:55:00.964160 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 13:55:00.964369 master-0 kubenswrapper[8988]: I1203 13:55:00.964318 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:55:00.964742 master-0 kubenswrapper[8988]: I1203 13:55:00.964717 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:00.982331 master-0 kubenswrapper[8988]: I1203 13:55:00.964824 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:00.982331 master-0 kubenswrapper[8988]: I1203 13:55:00.965228 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:55:00.982331 master-0 kubenswrapper[8988]: I1203 13:55:00.969796 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:55:00.982331 master-0 kubenswrapper[8988]: I1203 13:55:00.970418 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:55:00.982331 master-0 kubenswrapper[8988]: I1203 13:55:00.970621 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:55:00.998707 master-0 kubenswrapper[8988]: I1203 13:55:00.998624 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:55:00.999455 master-0 kubenswrapper[8988]: I1203 13:55:00.999409 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:00.999679 master-0 kubenswrapper[8988]: I1203 13:55:00.999634 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 13:55:01.144386 master-0 kubenswrapper[8988]: I1203 13:55:01.143242 8988 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 13:55:01.155703 master-0 kubenswrapper[8988]: I1203 13:55:01.155669 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:55:01.156093 master-0 kubenswrapper[8988]: E1203 13:55:01.156024 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcpv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-n24qb_openshift-network-operator(6ef37bba-85d9-4303-80c0-aac3dc49d3d9): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 03 13:55:01.157151 master-0 kubenswrapper[8988]: E1203 13:55:01.157121 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-network-operator/iptables-alerter-n24qb" podUID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9"
Dec 03 13:55:01.618536 master-0 kubenswrapper[8988]: I1203 13:55:01.618430 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:55:01.748367 master-0 kubenswrapper[8988]: I1203 13:55:01.748232 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:55:01.748367 master-0 kubenswrapper[8988]: I1203 13:55:01.748350 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:01.748367 master-0 kubenswrapper[8988]: I1203 13:55:01.748373 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:01.748367 master-0 kubenswrapper[8988]: I1203 13:55:01.748396 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748419 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748438 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748457 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748475 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748492 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748510 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: I1203 13:55:01.748529 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748611 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748684 8988 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748717 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748745 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.748727828 +0000 UTC m=+4.936796111 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748761 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.748754259 +0000 UTC m=+4.936822542 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748795 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748831 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748637 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748831 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.74877755 +0000 UTC m=+4.936845873 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found
Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748873 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed.
No retries permitted until 2025-12-03 13:55:03.748866082 +0000 UTC m=+4.936934355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:55:01.748869 master-0 kubenswrapper[8988]: E1203 13:55:01.748649 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748872 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748920 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748886 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.748881143 +0000 UTC m=+4.936949426 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748897 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748986 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.748978296 +0000 UTC m=+4.937046579 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748999 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.748993846 +0000 UTC m=+4.937062119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.748897 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.749009 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.749005377 +0000 UTC m=+4.937073660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.749026 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.749019927 +0000 UTC m=+4.937088210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.749037 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.749033137 +0000 UTC m=+4.937101420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found Dec 03 13:55:01.749393 master-0 kubenswrapper[8988]: E1203 13:55:01.749047 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:03.749043398 +0000 UTC m=+4.937111681 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:55:02.143640 master-0 kubenswrapper[8988]: E1203 13:55:02.143430 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" Dec 03 13:55:02.154360 master-0 kubenswrapper[8988]: E1203 13:55:02.144064 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw8h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000360000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-589f5cdc9d-5h2kn_openshift-cluster-olm-operator(803897bb-580e-4f7a-9be2-583fc607d1f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:55:02.154360 master-0 kubenswrapper[8988]: E1203 13:55:02.145368 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 13:55:02.793002 master-0 kubenswrapper[8988]: E1203 13:55:02.792608 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" Dec 03 13:55:02.793002 master-0 kubenswrapper[8988]: E1203 13:55:02.792855 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfs27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-667484ff5-n7qz8_openshift-apiserver-operator(1c562495-1290-4792-b4b2-639faa594ae2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:55:02.794893 master-0 kubenswrapper[8988]: E1203 13:55:02.794760 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 13:55:03.423889 master-0 kubenswrapper[8988]: E1203 13:55:03.423593 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" Dec 03 13:55:03.423889 master-0 kubenswrapper[8988]: E1203 13:55:03.423809 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrngd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-56f5898f45-fhnc5_openshift-service-ca-operator(f1f2d0e1-eaaf-4037-a976-5fc2a942c50c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:55:03.425198 master-0 kubenswrapper[8988]: E1203 13:55:03.425097 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769666 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769786 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769820 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod 
\"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769850 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769919 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769955 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.769979 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:03.770182 master-0 
kubenswrapper[8988]: E1203 13:55:03.769979 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: E1203 13:55:03.770130 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: E1203 13:55:03.770152 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770095247 +0000 UTC m=+8.958163580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: I1203 13:55:03.770006 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:03.770182 master-0 kubenswrapper[8988]: E1203 13:55:03.770166 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770214 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:03.770813 master-0 
kubenswrapper[8988]: E1203 13:55:03.770250 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770200 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770180199 +0000 UTC m=+8.958248542 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770335 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: I1203 13:55:03.770352 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770363 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 13:55:07.770353824 +0000 UTC m=+8.958422197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: I1203 13:55:03.770403 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770414 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770440 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770465 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770463 8988 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770452 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs 
podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770404546 +0000 UTC m=+8.958472829 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770581 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770568261 +0000 UTC m=+8.958636664 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770605 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770596521 +0000 UTC m=+8.958664964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: I1203 13:55:03.770639 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770684 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770675054 +0000 UTC m=+8.958743447 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770697 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770691304 +0000 UTC m=+8.958759677 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770710 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770703585 +0000 UTC m=+8.958772038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770732 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770724635 +0000 UTC m=+8.958793008 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770733 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:55:03.770813 master-0 kubenswrapper[8988]: E1203 13:55:03.770778 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:07.770764786 +0000 UTC m=+8.958833069 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found
Dec 03 13:55:05.213363 master-0 kubenswrapper[8988]: I1203 13:55:05.213214 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:55:07.027968 master-0 kubenswrapper[8988]: I1203 13:55:07.027860 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:55:07.057355 master-0 kubenswrapper[8988]: I1203 13:55:07.057282 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:55:07.138308 master-0 kubenswrapper[8988]: I1203 13:55:07.138146 8988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:55:07.138308 master-0 kubenswrapper[8988]: I1203 13:55:07.138188 8988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:55:07.482876 master-0 kubenswrapper[8988]: I1203 13:55:07.482637 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:07.488789 master-0 kubenswrapper[8988]: I1203 13:55:07.488709 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:07.808443 master-0 kubenswrapper[8988]: I1203 13:55:07.808357 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:07.808443 master-0 kubenswrapper[8988]: I1203 13:55:07.808434 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808485 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808518 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808543 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808567 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808595 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: E1203 13:55:07.808610 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: E1203 13:55:07.808659 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: I1203 13:55:07.808619 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: E1203 13:55:07.808739 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.808718582 +0000 UTC m=+16.996786865 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: E1203 13:55:07.808738 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Dec 03 13:55:07.808792 master-0 kubenswrapper[8988]: E1203 13:55:07.808742 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.808758 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.808747843 +0000 UTC m=+16.996816126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: I1203 13:55:07.808892 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.808610 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.808971 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: I1203 13:55:07.808929 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809005 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.80898887 +0000 UTC m=+16.997057143 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809051 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809068 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809045232 +0000 UTC m=+16.997113585 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.808755 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809103 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809090443 +0000 UTC m=+16.997158836 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809100 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809122 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809113044 +0000 UTC m=+16.997181427 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found
Dec 03 13:55:07.809220 master-0 kubenswrapper[8988]: E1203 13:55:07.809077 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: I1203 13:55:07.809235 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809297 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809229117 +0000 UTC m=+16.997297410 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809307 8988 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809363 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.80934333 +0000 UTC m=+16.997411743 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809393 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809384751 +0000 UTC m=+16.997453244 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809420 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809407742 +0000 UTC m=+16.997476265 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found
Dec 03 13:55:07.809704 master-0 kubenswrapper[8988]: E1203 13:55:07.809442 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:15.809433703 +0000 UTC m=+16.997502066 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found
Dec 03 13:55:08.262030 master-0 kubenswrapper[8988]: I1203 13:55:08.258523 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: E1203 13:55:08.272150 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110"
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: E1203 13:55:08.272509 8988 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: echo "Copying system trust bundle"
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: fi
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.28_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czfkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-7479ffdf48-hpdzl_openshift-authentication-operator(0535e784-8e28-4090-aa2e-df937910767c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: > logger="UnhandledError"
Dec 03 13:55:08.278570 master-0 kubenswrapper[8988]: E1203 13:55:08.273654 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 13:55:08.295490 master-0 kubenswrapper[8988]: I1203 13:55:08.295423 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Dec 03 13:55:08.977404 master-0 kubenswrapper[8988]: I1203 13:55:08.976616 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:08.981684 master-0 kubenswrapper[8988]: I1203 13:55:08.981621 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:08.984166 master-0 kubenswrapper[8988]: E1203 13:55:08.983359 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36"
Dec 03 13:55:08.984166 master-0 kubenswrapper[8988]: E1203 13:55:08.983604 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fdcfa264ad6a1a653f17399845e605dbc99e8f391cfdd940fd684819d6e91001,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.13,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-b5dddf8f5-kwb74_openshift-kube-controller-manager-operator(b051ae27-7879-448d-b426-4dce76e29739): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 03 13:55:08.985612 master-0 kubenswrapper[8988]: E1203 13:55:08.985554 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 13:55:09.439141 master-0 kubenswrapper[8988]: I1203 13:55:09.439075 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:55:09.439774 master-0 kubenswrapper[8988]: I1203 13:55:09.439276 8988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:55:09.439774 master-0 kubenswrapper[8988]: I1203 13:55:09.439291 8988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:55:09.468338 master-0 kubenswrapper[8988]: I1203 13:55:09.468078 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:55:09.523146 master-0 kubenswrapper[8988]: E1203 13:55:09.523055 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395"
Dec 03 13:55:09.523468 master-0 kubenswrapper[8988]: E1203 13:55:09.523329 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jkbcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-7c4697b5f5-9f69p_openshift-controller-manager-operator(adbcce01-7282-4a75-843a-9623060346f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 03 13:55:09.524663 master-0 kubenswrapper[8988]: E1203 13:55:09.524590 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 13:55:10.116383 master-0 kubenswrapper[8988]: E1203 13:55:10.115935 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9"
Dec 03 13:55:10.116725 master-0 kubenswrapper[8988]: E1203 13:55:10.116483 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c66ee8a8046344aa58697caaca2d9d714015af8840d3b6ea28d91ab112b2adb,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2fns8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-7b795784b8-44frm_openshift-cluster-storage-operator(c180b512-bf0c-4ddc-a5cf-f04acc830a61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 03 13:55:10.117835 master-0 kubenswrapper[8988]: E1203 13:55:10.117740 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 13:55:10.146919 master-0 kubenswrapper[8988]: I1203 13:55:10.146851 8988 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 13:55:10.437086 master-0 kubenswrapper[8988]: I1203 13:55:10.436759 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:55:10.709335 master-0 kubenswrapper[8988]: E1203 13:55:10.709224 8988 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5"
Dec 03 13:55:10.710278 master-0 kubenswrapper[8988]: E1203 13:55:10.709950 8988 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-storage-version-migrator-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5,Command:[cluster-kube-storage-version-migrator-operator start],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.28,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb6pb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod
kube-storage-version-migrator-operator-67c4cff67d-q2lxz_openshift-kube-storage-version-migrator-operator(918ff36b-662f-46ae-b71a-301df7e67735): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 03 13:55:10.711456 master-0 kubenswrapper[8988]: E1203 13:55:10.711347 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 13:55:10.805106 master-0 kubenswrapper[8988]: I1203 13:55:10.804405 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:55:10.809935 master-0 kubenswrapper[8988]: I1203 13:55:10.809212 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:55:10.887386 master-0 kubenswrapper[8988]: I1203 13:55:10.887328 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:10.909725 master-0 kubenswrapper[8988]: I1203 13:55:10.909238 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-pcchm"] Dec 03 13:55:10.919700 master-0 kubenswrapper[8988]: W1203 13:55:10.919654 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d38d102_4efe_4ed3_ae23_b1e295cdaccd.slice/crio-783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d WatchSource:0}: Error finding container 783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d: Status 404 returned error can't find the container 
with id 783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d Dec 03 13:55:10.921866 master-0 kubenswrapper[8988]: I1203 13:55:10.921076 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:55:11.152641 master-0 kubenswrapper[8988]: I1203 13:55:11.151758 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae"} Dec 03 13:55:11.154856 master-0 kubenswrapper[8988]: I1203 13:55:11.154442 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"403adb4ba26d18c6883b3621c8cccb4164ce2519226eafb71472f42c8f4a82f4"} Dec 03 13:55:11.154856 master-0 kubenswrapper[8988]: I1203 13:55:11.154470 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d"} Dec 03 13:55:11.154856 master-0 kubenswrapper[8988]: I1203 13:55:11.154818 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:55:11.156832 master-0 kubenswrapper[8988]: I1203 13:55:11.156813 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"031ccde9164bce9c6766c132214d7fc14f96617b1164fd862cc2ac3b1e1bb8bf"} Dec 03 13:55:11.161057 master-0 kubenswrapper[8988]: I1203 13:55:11.161023 8988 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:55:15.908326 master-0 kubenswrapper[8988]: I1203 13:55:15.908090 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:55:15.908326 master-0 kubenswrapper[8988]: I1203 13:55:15.908173 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:15.908326 master-0 kubenswrapper[8988]: I1203 13:55:15.908219 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:55:15.908326 master-0 kubenswrapper[8988]: I1203 13:55:15.908241 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:55:15.908326 master-0 kubenswrapper[8988]: I1203 
13:55:15.908336 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908365 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908398 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908428 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908449 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod 
\"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908475 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: I1203 13:55:15.908496 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.908657 8988 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.908739 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert podName:ce26e464-9a7c-4b22-a2b4-03706b351455 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.908709626 +0000 UTC m=+33.096777909 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert") pod "cluster-version-operator-869c786959-vrvwt" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455") : secret "cluster-version-operator-serving-cert" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909314 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909346 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909335264 +0000 UTC m=+33.097403537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909382 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909401 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909394226 +0000 UTC m=+33.097462509 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "node-tuning-operator-tls" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909442 8988 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909473 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909463848 +0000 UTC m=+33.097532121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : secret "performance-addon-operator-webhook-cert" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909521 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909547 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.90953904 +0000 UTC m=+33.097607323 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found Dec 03 13:55:15.909586 master-0 kubenswrapper[8988]: E1203 13:55:15.909594 8988 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909620 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909612242 +0000 UTC m=+33.097680525 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : secret "metrics-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909673 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909701 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909692745 +0000 UTC m=+33.097761028 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909749 8988 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909772 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909765157 +0000 UTC m=+33.097833440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : secret "image-registry-operator-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909824 8988 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909846 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909839639 +0000 UTC m=+33.097907922 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : secret "metrics-tls" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909889 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909916 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909907801 +0000 UTC m=+33.097976084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.909959 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Dec 03 13:55:15.910729 master-0 kubenswrapper[8988]: E1203 13:55:15.910000 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.909989253 +0000 UTC m=+33.098057536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found Dec 03 13:55:16.175131 master-0 kubenswrapper[8988]: I1203 13:55:16.174921 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c"} Dec 03 13:55:20.191901 master-0 kubenswrapper[8988]: I1203 13:55:20.191385 8988 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="2c07c96ce111810f4abda326bee63148f01fbb43604144637921c7eaf553e422" exitCode=0 Dec 03 13:55:20.193967 master-0 kubenswrapper[8988]: I1203 13:55:20.191522 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"2c07c96ce111810f4abda326bee63148f01fbb43604144637921c7eaf553e422"} Dec 03 13:55:20.195908 master-0 kubenswrapper[8988]: I1203 13:55:20.194366 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"1cc3343c335d6a9b27f34bdbcb883b37627ae437e550513d255ee4be2095c4a9"} Dec 03 13:55:20.198301 master-0 kubenswrapper[8988]: I1203 13:55:20.197315 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" 
event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"f767adcff9a0e233cd5a0d89a9f43dff3fc735aa20c23293aa5dcee5ce476e89"} Dec 03 13:55:21.051959 master-0 kubenswrapper[8988]: I1203 13:55:21.051876 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-6b8bb995f7-b68p8"] Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: E1203 13:55:21.052070 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober" Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: I1203 13:55:21.052091 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober" Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: E1203 13:55:21.052099 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller" Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: I1203 13:55:21.052106 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller" Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: I1203 13:55:21.052178 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober" Dec 03 13:55:21.052468 master-0 kubenswrapper[8988]: I1203 13:55:21.052193 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller" Dec 03 13:55:21.052854 master-0 kubenswrapper[8988]: I1203 13:55:21.052488 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.055801 master-0 kubenswrapper[8988]: I1203 13:55:21.055744 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 13:55:21.056071 master-0 kubenswrapper[8988]: I1203 13:55:21.055961 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 13:55:21.056586 master-0 kubenswrapper[8988]: I1203 13:55:21.056553 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 13:55:21.056913 master-0 kubenswrapper[8988]: I1203 13:55:21.056873 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 13:55:21.066884 master-0 kubenswrapper[8988]: I1203 13:55:21.066785 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.067305 master-0 kubenswrapper[8988]: I1203 13:55:21.066920 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.067305 master-0 kubenswrapper[8988]: I1203 13:55:21.066969 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod 
\"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.087040 master-0 kubenswrapper[8988]: I1203 13:55:21.086940 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-6b8bb995f7-b68p8"] Dec 03 13:55:21.168289 master-0 kubenswrapper[8988]: I1203 13:55:21.168174 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.168289 master-0 kubenswrapper[8988]: I1203 13:55:21.168299 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.168770 master-0 kubenswrapper[8988]: I1203 13:55:21.168342 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.174171 master-0 kubenswrapper[8988]: I1203 13:55:21.169889 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.181771 master-0 
kubenswrapper[8988]: I1203 13:55:21.181712 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.208036 master-0 kubenswrapper[8988]: I1203 13:55:21.207976 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.388233 master-0 kubenswrapper[8988]: I1203 13:55:21.388122 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:55:21.848694 master-0 kubenswrapper[8988]: I1203 13:55:21.847374 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-6b8bb995f7-b68p8"] Dec 03 13:55:21.862037 master-0 kubenswrapper[8988]: W1203 13:55:21.861942 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36da3c2f_860c_4188_a7d7_5b615981a835.slice/crio-3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db WatchSource:0}: Error finding container 3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db: Status 404 returned error can't find the container with id 3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db Dec 03 13:55:22.212277 master-0 kubenswrapper[8988]: I1203 13:55:22.212022 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" 
event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"dd2b482224494992684139be050546a28c62ac157d9e9264488cb5828b1c1e47"} Dec 03 13:55:22.212277 master-0 kubenswrapper[8988]: I1203 13:55:22.212094 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db"} Dec 03 13:55:22.380210 master-0 kubenswrapper[8988]: I1203 13:55:22.379362 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podStartSLOduration=1.379299936 podStartE2EDuration="1.379299936s" podCreationTimestamp="2025-12-03 13:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:22.379086919 +0000 UTC m=+23.567155222" watchObservedRunningTime="2025-12-03 13:55:22.379299936 +0000 UTC m=+23.567368229" Dec 03 13:55:25.237308 master-0 kubenswrapper[8988]: I1203 13:55:25.236014 8988 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="af97d05966a6b8b6492c95f3ae8bbb3e7b5394709c3c830a0152652cc4e1899b" exitCode=0 Dec 03 13:55:25.237308 master-0 kubenswrapper[8988]: I1203 13:55:25.236175 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"af97d05966a6b8b6492c95f3ae8bbb3e7b5394709c3c830a0152652cc4e1899b"} Dec 03 13:55:25.250511 master-0 kubenswrapper[8988]: I1203 13:55:25.246943 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" 
event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"70dfdf1d245b899ffd4f89819f8560cdba94451d4d92e6018d477dc269e6ea12"} Dec 03 13:55:25.251025 master-0 kubenswrapper[8988]: I1203 13:55:25.250943 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"25e7a24f654330de81677025ca04a819442a5e884c2ac0658b76adfc9af0ebbb"} Dec 03 13:55:26.259413 master-0 kubenswrapper[8988]: I1203 13:55:26.258869 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"260c925573f93c0439722d8810ce6c195e1dc2d279cb295c92ace13d1222474e"} Dec 03 13:55:26.262570 master-0 kubenswrapper[8988]: I1203 13:55:26.262486 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"9d7457bb900844a16e5e3a7cfd4664192d8040e5785b96d2e474f9f0d185dccc"} Dec 03 13:55:26.264382 master-0 kubenswrapper[8988]: I1203 13:55:26.264332 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"4edfa8a89bc0d5038266241047b9c2dea2c14e6566f232726960cf6811e895c0"} Dec 03 13:55:26.791600 master-0 kubenswrapper[8988]: I1203 13:55:26.790284 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"] Dec 03 13:55:26.791600 master-0 kubenswrapper[8988]: I1203 13:55:26.791050 8988 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 13:55:26.815721 master-0 kubenswrapper[8988]: I1203 13:55:26.815579 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"] Dec 03 13:55:26.847570 master-0 kubenswrapper[8988]: I1203 13:55:26.846057 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 13:55:26.947977 master-0 kubenswrapper[8988]: I1203 13:55:26.947766 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 13:55:27.070305 master-0 kubenswrapper[8988]: I1203 13:55:27.068917 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 13:55:27.134298 master-0 kubenswrapper[8988]: I1203 13:55:27.130884 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 13:55:27.451643 master-0 kubenswrapper[8988]: I1203 13:55:27.449952 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"] Dec 03 13:55:27.564468 master-0 kubenswrapper[8988]: W1203 13:55:27.564349 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ae92a3_0ff8_4650_8a7b_e26e4c86c8f4.slice/crio-8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd WatchSource:0}: Error finding container 8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd: Status 404 returned error can't find the container with id 8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd Dec 03 13:55:28.285218 master-0 kubenswrapper[8988]: I1203 13:55:28.285109 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd"} Dec 03 13:55:29.013009 master-0 kubenswrapper[8988]: I1203 13:55:29.012719 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"] Dec 03 13:55:29.014023 master-0 kubenswrapper[8988]: I1203 13:55:29.013368 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.015695 master-0 kubenswrapper[8988]: I1203 13:55:29.015642 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 13:55:29.015871 master-0 kubenswrapper[8988]: I1203 13:55:29.015836 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 13:55:29.016084 master-0 kubenswrapper[8988]: I1203 13:55:29.016020 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 13:55:29.018787 master-0 kubenswrapper[8988]: I1203 13:55:29.018720 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 13:55:29.018787 master-0 kubenswrapper[8988]: I1203 13:55:29.018777 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 13:55:29.020749 master-0 kubenswrapper[8988]: I1203 13:55:29.020436 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 13:55:29.054100 master-0 kubenswrapper[8988]: I1203 13:55:29.053663 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"] Dec 03 13:55:29.170498 master-0 kubenswrapper[8988]: I1203 13:55:29.170444 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.170498 master-0 kubenswrapper[8988]: I1203 13:55:29.170507 8988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.170736 master-0 kubenswrapper[8988]: I1203 13:55:29.170544 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.170736 master-0 kubenswrapper[8988]: I1203 13:55:29.170607 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2btpg\" (UniqueName: \"kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.170736 master-0 kubenswrapper[8988]: I1203 13:55:29.170627 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.244482 master-0 kubenswrapper[8988]: I1203 13:55:29.244362 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"] Dec 03 13:55:29.245285 master-0 kubenswrapper[8988]: I1203 13:55:29.245073 
8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:55:29.247328 master-0 kubenswrapper[8988]: I1203 13:55:29.246980 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 13:55:29.247987 master-0 kubenswrapper[8988]: I1203 13:55:29.247907 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 13:55:29.273430 master-0 kubenswrapper[8988]: I1203 13:55:29.273216 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.273752 master-0 kubenswrapper[8988]: I1203 13:55:29.273477 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2btpg\" (UniqueName: \"kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.273752 master-0 kubenswrapper[8988]: I1203 13:55:29.273520 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.273752 master-0 kubenswrapper[8988]: E1203 13:55:29.273504 8988 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/client-ca: configmap "client-ca" not found Dec 03 13:55:29.273752 master-0 kubenswrapper[8988]: I1203 13:55:29.273674 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.273752 master-0 kubenswrapper[8988]: E1203 13:55:29.273704 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:29.773676695 +0000 UTC m=+30.961745188 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "client-ca" not found Dec 03 13:55:29.273973 master-0 kubenswrapper[8988]: I1203 13:55:29.273772 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.273973 master-0 kubenswrapper[8988]: E1203 13:55:29.273852 8988 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Dec 03 13:55:29.273973 master-0 kubenswrapper[8988]: E1203 13:55:29.273961 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config 
podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:29.773935992 +0000 UTC m=+30.962004465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "config" not found Dec 03 13:55:29.274090 master-0 kubenswrapper[8988]: E1203 13:55:29.273863 8988 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Dec 03 13:55:29.274090 master-0 kubenswrapper[8988]: E1203 13:55:29.273865 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Dec 03 13:55:29.274170 master-0 kubenswrapper[8988]: E1203 13:55:29.274024 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:29.774012314 +0000 UTC m=+30.962080597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "openshift-global-ca" not found Dec 03 13:55:29.274170 master-0 kubenswrapper[8988]: E1203 13:55:29.274148 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:29.774126578 +0000 UTC m=+30.962194941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : secret "serving-cert" not found Dec 03 13:55:29.274665 master-0 kubenswrapper[8988]: I1203 13:55:29.274623 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"] Dec 03 13:55:29.325295 master-0 kubenswrapper[8988]: I1203 13:55:29.323645 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2btpg\" (UniqueName: \"kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:29.375238 master-0 kubenswrapper[8988]: I1203 13:55:29.374879 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:55:29.475876 master-0 kubenswrapper[8988]: I1203 13:55:29.475549 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:55:29.601714 master-0 kubenswrapper[8988]: I1203 13:55:29.601575 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv7s\" (UniqueName: 
\"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:55:29.619875 master-0 kubenswrapper[8988]: I1203 13:55:29.619768 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:55:30.112311 master-0 kubenswrapper[8988]: I1203 13:55:30.112196 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:30.119806 master-0 kubenswrapper[8988]: I1203 13:55:30.119324 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:30.120126 master-0 kubenswrapper[8988]: I1203 13:55:30.120091 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 13:55:30.112892 8988 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 
13:55:30.120631 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.120604864 +0000 UTC m=+32.308673147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "config" not found Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 13:55:30.120398 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 13:55:30.121113 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.121102199 +0000 UTC m=+32.309170482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : secret "serving-cert" not found Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 13:55:30.120522 8988 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Dec 03 13:55:30.121587 master-0 kubenswrapper[8988]: E1203 13:55:30.121239 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. 
No retries permitted until 2025-12-03 13:55:31.121224632 +0000 UTC m=+32.309292915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "openshift-global-ca" not found Dec 03 13:55:30.121985 master-0 kubenswrapper[8988]: I1203 13:55:30.121963 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:30.122285 master-0 kubenswrapper[8988]: E1203 13:55:30.122126 8988 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Dec 03 13:55:30.122285 master-0 kubenswrapper[8988]: E1203 13:55:30.122245 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.122237682 +0000 UTC m=+32.310305965 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : configmap "client-ca" not found Dec 03 13:55:30.321698 master-0 kubenswrapper[8988]: I1203 13:55:30.320867 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"dff8c52d92948d0d5db37201f109e554786a56d7027a599b541b924b03c573e3"} Dec 03 13:55:30.481022 master-0 kubenswrapper[8988]: I1203 13:55:30.480857 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"] Dec 03 13:55:30.530222 master-0 kubenswrapper[8988]: I1203 13:55:30.530136 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Dec 03 13:55:30.532085 master-0 kubenswrapper[8988]: I1203 13:55:30.532052 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.539611 master-0 kubenswrapper[8988]: I1203 13:55:30.539048 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Dec 03 13:55:30.550339 master-0 kubenswrapper[8988]: I1203 13:55:30.549303 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Dec 03 13:55:30.641809 master-0 kubenswrapper[8988]: I1203 13:55:30.641383 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.641809 master-0 kubenswrapper[8988]: I1203 13:55:30.641451 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.641809 master-0 kubenswrapper[8988]: I1203 13:55:30.641527 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.681586 master-0 kubenswrapper[8988]: I1203 13:55:30.681511 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Dec 03 13:55:30.682593 master-0 kubenswrapper[8988]: I1203 13:55:30.682494 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Dec 03 13:55:30.686198 master-0 kubenswrapper[8988]: I1203 13:55:30.686140 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Dec 03 13:55:30.715214 master-0 kubenswrapper[8988]: I1203 13:55:30.715137 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Dec 03 13:55:30.743248 master-0 kubenswrapper[8988]: I1203 13:55:30.743030 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0" Dec 03 13:55:30.743602 master-0 kubenswrapper[8988]: I1203 13:55:30.743346 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.743602 master-0 kubenswrapper[8988]: I1203 13:55:30.743576 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0" Dec 03 13:55:30.743988 master-0 kubenswrapper[8988]: I1203 13:55:30.743911 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.744156 master-0 kubenswrapper[8988]: I1203 13:55:30.744114 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.744227 master-0 kubenswrapper[8988]: I1203 13:55:30.744165 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0" Dec 03 13:55:30.744227 master-0 kubenswrapper[8988]: I1203 13:55:30.744194 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.744227 master-0 kubenswrapper[8988]: I1203 13:55:30.744191 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:30.813482 master-0 kubenswrapper[8988]: I1203 13:55:30.813380 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access\") pod \"installer-1-master-0\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " 
pod="openshift-kube-scheduler/installer-1-master-0"
Dec 03 13:55:30.821305 master-0 kubenswrapper[8988]: I1203 13:55:30.821152 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"]
Dec 03 13:55:30.827035 master-0 kubenswrapper[8988]: E1203 13:55:30.825525 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" podUID="70c1c761-4c15-46bd-8b20-10214376974b"
Dec 03 13:55:30.844970 master-0 kubenswrapper[8988]: I1203 13:55:30.844874 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:30.844970 master-0 kubenswrapper[8988]: I1203 13:55:30.844953 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:30.845412 master-0 kubenswrapper[8988]: I1203 13:55:30.845094 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:30.845412 master-0 kubenswrapper[8988]: I1203 13:55:30.845134 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:30.845412 master-0 kubenswrapper[8988]: I1203 13:55:30.845196 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:30.891007 master-0 kubenswrapper[8988]: I1203 13:55:30.886387 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Dec 03 13:55:30.904421 master-0 kubenswrapper[8988]: I1203 13:55:30.904317 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"]
Dec 03 13:55:30.905097 master-0 kubenswrapper[8988]: I1203 13:55:30.905054 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:30.908217 master-0 kubenswrapper[8988]: I1203 13:55:30.908140 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 13:55:30.908828 master-0 kubenswrapper[8988]: I1203 13:55:30.908752 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:55:30.908900 master-0 kubenswrapper[8988]: I1203 13:55:30.908856 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 13:55:30.909008 master-0 kubenswrapper[8988]: I1203 13:55:30.908940 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 13:55:30.909069 master-0 kubenswrapper[8988]: I1203 13:55:30.908863 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 13:55:30.943752 master-0 kubenswrapper[8988]: I1203 13:55:30.943483 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"]
Dec 03 13:55:30.945965 master-0 kubenswrapper[8988]: I1203 13:55:30.945900 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:30.946062 master-0 kubenswrapper[8988]: I1203 13:55:30.945996 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9lp\" (UniqueName: \"kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:30.946122 master-0 kubenswrapper[8988]: I1203 13:55:30.946088 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:30.946173 master-0 kubenswrapper[8988]: I1203 13:55:30.946137 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:30.955748 master-0 kubenswrapper[8988]: I1203 13:55:30.955685 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access\") pod \"installer-1-master-0\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") " pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:31.016423 master-0 kubenswrapper[8988]: I1203 13:55:31.010778 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 13:55:31.046814 master-0 kubenswrapper[8988]: I1203 13:55:31.046736 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9lp\" (UniqueName: \"kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.046814 master-0 kubenswrapper[8988]: I1203 13:55:31.046827 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.047240 master-0 kubenswrapper[8988]: I1203 13:55:31.046862 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.047240 master-0 kubenswrapper[8988]: I1203 13:55:31.046954 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.047343 master-0 kubenswrapper[8988]: E1203 13:55:31.047219 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03 13:55:31.047427 master-0 kubenswrapper[8988]: E1203 13:55:31.047397 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:31.547363587 +0000 UTC m=+32.735431870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : secret "serving-cert" not found
Dec 03 13:55:31.048390 master-0 kubenswrapper[8988]: I1203 13:55:31.048351 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.048462 master-0 kubenswrapper[8988]: I1203 13:55:31.048419 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.148752 master-0 kubenswrapper[8988]: I1203 13:55:31.148634 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.148752 master-0 kubenswrapper[8988]: I1203 13:55:31.148763 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.148752 master-0 kubenswrapper[8988]: I1203 13:55:31.148794 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.150042 master-0 kubenswrapper[8988]: I1203 13:55:31.148835 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.150042 master-0 kubenswrapper[8988]: E1203 13:55:31.149616 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03 13:55:31.150042 master-0 kubenswrapper[8988]: E1203 13:55:31.149767 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:33.149737396 +0000 UTC m=+34.337805669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : secret "serving-cert" not found
Dec 03 13:55:31.150607 master-0 kubenswrapper[8988]: I1203 13:55:31.150117 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.150853 master-0 kubenswrapper[8988]: I1203 13:55:31.150773 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.151122 master-0 kubenswrapper[8988]: I1203 13:55:31.151045 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.326023 master-0 kubenswrapper[8988]: I1203 13:55:31.325934 8988 generic.go:334] "Generic (PLEG): container finished" podID="0535e784-8e28-4090-aa2e-df937910767c" containerID="70dfdf1d245b899ffd4f89819f8560cdba94451d4d92e6018d477dc269e6ea12" exitCode=0
Dec 03 13:55:31.326023 master-0 kubenswrapper[8988]: I1203 13:55:31.326015 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerDied","Data":"70dfdf1d245b899ffd4f89819f8560cdba94451d4d92e6018d477dc269e6ea12"}
Dec 03 13:55:31.326710 master-0 kubenswrapper[8988]: I1203 13:55:31.326669 8988 scope.go:117] "RemoveContainer" containerID="70dfdf1d245b899ffd4f89819f8560cdba94451d4d92e6018d477dc269e6ea12"
Dec 03 13:55:31.328425 master-0 kubenswrapper[8988]: I1203 13:55:31.328365 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/0.log"
Dec 03 13:55:31.328523 master-0 kubenswrapper[8988]: I1203 13:55:31.328429 8988 generic.go:334] "Generic (PLEG): container finished" podID="adbcce01-7282-4a75-843a-9623060346f0" containerID="9d7457bb900844a16e5e3a7cfd4664192d8040e5785b96d2e474f9f0d185dccc" exitCode=1
Dec 03 13:55:31.328523 master-0 kubenswrapper[8988]: I1203 13:55:31.328490 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.328638 master-0 kubenswrapper[8988]: I1203 13:55:31.328516 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerDied","Data":"9d7457bb900844a16e5e3a7cfd4664192d8040e5785b96d2e474f9f0d185dccc"}
Dec 03 13:55:31.328834 master-0 kubenswrapper[8988]: I1203 13:55:31.328805 8988 scope.go:117] "RemoveContainer" containerID="9d7457bb900844a16e5e3a7cfd4664192d8040e5785b96d2e474f9f0d185dccc"
Dec 03 13:55:31.335546 master-0 kubenswrapper[8988]: I1203 13:55:31.335490 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:31.458508 master-0 kubenswrapper[8988]: I1203 13:55:31.457767 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") pod \"70c1c761-4c15-46bd-8b20-10214376974b\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") "
Dec 03 13:55:31.458508 master-0 kubenswrapper[8988]: I1203 13:55:31.458458 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca" (OuterVolumeSpecName: "client-ca") pod "70c1c761-4c15-46bd-8b20-10214376974b" (UID: "70c1c761-4c15-46bd-8b20-10214376974b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:31.458940 master-0 kubenswrapper[8988]: I1203 13:55:31.458604 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2btpg\" (UniqueName: \"kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg\") pod \"70c1c761-4c15-46bd-8b20-10214376974b\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") "
Dec 03 13:55:31.458940 master-0 kubenswrapper[8988]: I1203 13:55:31.458773 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") pod \"70c1c761-4c15-46bd-8b20-10214376974b\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") "
Dec 03 13:55:31.458940 master-0 kubenswrapper[8988]: I1203 13:55:31.458802 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") pod \"70c1c761-4c15-46bd-8b20-10214376974b\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") "
Dec 03 13:55:31.459354 master-0 kubenswrapper[8988]: I1203 13:55:31.459323 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:31.459664 master-0 kubenswrapper[8988]: I1203 13:55:31.459587 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config" (OuterVolumeSpecName: "config") pod "70c1c761-4c15-46bd-8b20-10214376974b" (UID: "70c1c761-4c15-46bd-8b20-10214376974b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:31.459750 master-0 kubenswrapper[8988]: I1203 13:55:31.459699 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "70c1c761-4c15-46bd-8b20-10214376974b" (UID: "70c1c761-4c15-46bd-8b20-10214376974b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:31.462556 master-0 kubenswrapper[8988]: I1203 13:55:31.462465 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg" (OuterVolumeSpecName: "kube-api-access-2btpg") pod "70c1c761-4c15-46bd-8b20-10214376974b" (UID: "70c1c761-4c15-46bd-8b20-10214376974b"). InnerVolumeSpecName "kube-api-access-2btpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:55:31.560960 master-0 kubenswrapper[8988]: I1203 13:55:31.560830 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:31.560960 master-0 kubenswrapper[8988]: I1203 13:55:31.560984 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2btpg\" (UniqueName: \"kubernetes.io/projected/70c1c761-4c15-46bd-8b20-10214376974b-kube-api-access-2btpg\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:31.561712 master-0 kubenswrapper[8988]: I1203 13:55:31.561000 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:31.561712 master-0 kubenswrapper[8988]: I1203 13:55:31.561011 8988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/70c1c761-4c15-46bd-8b20-10214376974b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:31.561712 master-0 kubenswrapper[8988]: E1203 13:55:31.561150 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03 13:55:31.561712 master-0 kubenswrapper[8988]: E1203 13:55:31.561225 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:32.561201167 +0000 UTC m=+33.749269460 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : secret "serving-cert" not found
Dec 03 13:55:31.967049 master-0 kubenswrapper[8988]: I1203 13:55:31.966545 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: I1203 13:55:31.967077 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: I1203 13:55:31.967116 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: E1203 13:55:31.966739 8988 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: E1203 13:55:31.967295 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 13:56:03.967234829 +0000 UTC m=+65.155303292 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : secret "metrics-daemon-secret" not found
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: E1203 13:55:31.967305 8988 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: I1203 13:55:31.967145 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: E1203 13:55:31.967383 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs podName:63aae3b9-9a72-497e-af01-5d8b8d0ac876 nodeName:}" failed. No retries permitted until 2025-12-03 13:56:03.967359613 +0000 UTC m=+65.155428106 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs") pod "multus-admission-controller-78ddcf56f9-8l84w" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876") : secret "multus-admission-controller-secret" not found
Dec 03 13:55:31.967417 master-0 kubenswrapper[8988]: I1203 13:55:31.967415 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:31.967774 master-0 kubenswrapper[8988]: I1203 13:55:31.967472 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:31.967774 master-0 kubenswrapper[8988]: I1203 13:55:31.967501 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:55:31.967774 master-0 kubenswrapper[8988]: E1203 13:55:31.967406 8988 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Dec 03 13:55:31.967774 master-0 kubenswrapper[8988]: I1203 13:55:31.967528 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:31.967902 master-0 kubenswrapper[8988]: E1203 13:55:31.967633 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 13:56:03.96760656 +0000 UTC m=+65.155674903 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : secret "package-server-manager-serving-cert" not found
Dec 03 13:55:31.967902 master-0 kubenswrapper[8988]: E1203 13:55:31.967773 8988 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Dec 03 13:55:31.967902 master-0 kubenswrapper[8988]: E1203 13:55:31.967871 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 13:56:03.967854277 +0000 UTC m=+65.155922730 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : secret "marketplace-operator-metrics" not found
Dec 03 13:55:31.967902 master-0 kubenswrapper[8988]: I1203 13:55:31.967871 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:55:31.968078 master-0 kubenswrapper[8988]: I1203 13:55:31.967950 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:31.968078 master-0 kubenswrapper[8988]: I1203 13:55:31.967994 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:31.968078 master-0 kubenswrapper[8988]: E1203 13:55:31.967997 8988 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Dec 03 13:55:31.968228 master-0 kubenswrapper[8988]: E1203 13:55:31.968105 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 13:56:03.968068014 +0000 UTC m=+65.156136497 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : secret "cluster-monitoring-operator-tls" not found
Dec 03 13:55:31.972002 master-0 kubenswrapper[8988]: I1203 13:55:31.971947 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:31.972271 master-0 kubenswrapper[8988]: I1203 13:55:31.972213 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:31.972342 master-0 kubenswrapper[8988]: I1203 13:55:31.972297 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:31.972342 master-0 kubenswrapper[8988]: I1203 13:55:31.972309 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:31.972546 master-0 kubenswrapper[8988]: I1203 13:55:31.972492 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:31.972956 master-0 kubenswrapper[8988]: I1203 13:55:31.972904 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"cluster-version-operator-869c786959-vrvwt\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") " pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:32.039525 master-0 kubenswrapper[8988]: I1203 13:55:32.039433 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:55:32.040519 master-0 kubenswrapper[8988]: I1203 13:55:32.040408 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:55:32.043937 master-0 kubenswrapper[8988]: I1203 13:55:32.043891 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:55:32.044651 master-0 kubenswrapper[8988]: I1203 13:55:32.044625 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:55:32.047622 master-0 kubenswrapper[8988]: I1203 13:55:32.047589 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:55:32.334498 master-0 kubenswrapper[8988]: I1203 13:55:32.334429 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:32.577789 master-0 kubenswrapper[8988]: I1203 13:55:32.577639 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:32.578171 master-0 kubenswrapper[8988]: E1203 13:55:32.577998 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03 13:55:32.578301 master-0 kubenswrapper[8988]: E1203 13:55:32.578243 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:34.578203494 +0000 UTC m=+35.766271967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : secret "serving-cert" not found
Dec 03 13:55:33.127591 master-0 kubenswrapper[8988]: W1203 13:55:33.127005 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f723d97_5c65_4ae7_9085_26db8b4f2f52.slice/crio-cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968 WatchSource:0}: Error finding container cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968: Status 404 returned error can't find the container with id cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968
Dec 03 13:55:33.149269 master-0 kubenswrapper[8988]: I1203 13:55:33.149159 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9lp\" (UniqueName: \"kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:33.185173 master-0 kubenswrapper[8988]: I1203 13:55:33.185125 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"
Dec 03 13:55:33.185421 master-0 kubenswrapper[8988]: E1203 13:55:33.185387 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 13:55:33.185523 master-0 kubenswrapper[8988]: E1203 13:55:33.185464 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:37.18544333 +0000 UTC m=+38.373511613 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 13:55:33.353053 master-0 kubenswrapper[8988]: I1203 13:55:33.351727 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" event={"ID":"ce26e464-9a7c-4b22-a2b4-03706b351455","Type":"ContainerStarted","Data":"7fb4e2d334a547fbeaaea1fa9c53c41549464da1350be876ed579d7818ec2701"}
Dec 03 13:55:33.374637 master-0 kubenswrapper[8988]: I1203 13:55:33.374210 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968"}
Dec 03 13:55:34.604904 master-0 kubenswrapper[8988]: I1203 13:55:34.604743 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:34.609683 master-0 kubenswrapper[8988]: E1203 13:55:34.605004 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03
13:55:34.609683 master-0 kubenswrapper[8988]: E1203 13:55:34.605150 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:38.605129514 +0000 UTC m=+39.793197797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : secret "serving-cert" not found Dec 03 13:55:37.683806 master-0 kubenswrapper[8988]: I1203 13:55:37.676426 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") pod \"controller-manager-56fb5cd58b-5hnj2\" (UID: \"70c1c761-4c15-46bd-8b20-10214376974b\") " pod="openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2" Dec 03 13:55:37.683806 master-0 kubenswrapper[8988]: E1203 13:55:37.676830 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 13:55:37.683806 master-0 kubenswrapper[8988]: E1203 13:55:37.676910 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert podName:70c1c761-4c15-46bd-8b20-10214376974b nodeName:}" failed. No retries permitted until 2025-12-03 13:55:45.676872562 +0000 UTC m=+46.864940845 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert") pod "controller-manager-56fb5cd58b-5hnj2" (UID: "70c1c761-4c15-46bd-8b20-10214376974b") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 13:55:37.700157 master-0 kubenswrapper[8988]: I1203 13:55:37.694560 8988 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:55:38.686245 master-0 kubenswrapper[8988]: I1203 13:55:38.685875 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq" Dec 03 13:55:38.686245 master-0 kubenswrapper[8988]: E1203 13:55:38.686200 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Dec 03 13:55:38.686245 master-0 kubenswrapper[8988]: E1203 13:55:38.686370 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:46.686336244 +0000 UTC m=+47.874404707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : secret "serving-cert" not found Dec 03 13:55:38.705145 master-0 kubenswrapper[8988]: I1203 13:55:38.704964 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"97750e8f74736f079d18144b0d15743bbfdb9d2f2d7177cc7d677c65b0c4a40e"} Dec 03 13:55:38.708091 master-0 kubenswrapper[8988]: I1203 13:55:38.708034 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/0.log" Dec 03 13:55:38.708335 master-0 kubenswrapper[8988]: I1203 13:55:38.708109 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"594fb0126cf93faf50cc852686eaa0e96acf2a43e60f5721648d7a6bb2d3b91d"} Dec 03 13:55:42.261728 master-0 kubenswrapper[8988]: I1203 13:55:42.259050 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Dec 03 13:55:42.261728 master-0 kubenswrapper[8988]: I1203 13:55:42.260142 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"] Dec 03 13:55:42.270379 master-0 kubenswrapper[8988]: I1203 13:55:42.265737 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"] Dec 03 13:55:42.286557 master-0 kubenswrapper[8988]: I1203 13:55:42.274066 8988 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"] Dec 03 13:55:42.294299 master-0 kubenswrapper[8988]: I1203 13:55:42.290726 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Dec 03 13:55:42.294299 master-0 kubenswrapper[8988]: I1203 13:55:42.290806 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"] Dec 03 13:55:42.527418 master-0 kubenswrapper[8988]: I1203 13:55:42.523250 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7c895b7864-fxr2k"] Dec 03 13:55:42.527418 master-0 kubenswrapper[8988]: I1203 13:55:42.524589 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.548447 master-0 kubenswrapper[8988]: I1203 13:55:42.546386 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Dec 03 13:55:42.548447 master-0 kubenswrapper[8988]: I1203 13:55:42.546867 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 13:55:42.548447 master-0 kubenswrapper[8988]: I1203 13:55:42.547661 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 13:55:42.548447 master-0 kubenswrapper[8988]: I1203 13:55:42.547710 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 13:55:42.549405 master-0 kubenswrapper[8988]: I1203 13:55:42.548993 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 13:55:42.549405 master-0 kubenswrapper[8988]: I1203 13:55:42.549183 8988 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 13:55:42.549405 master-0 kubenswrapper[8988]: I1203 13:55:42.549300 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Dec 03 13:55:42.549405 master-0 kubenswrapper[8988]: I1203 13:55:42.549328 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 13:55:42.562168 master-0 kubenswrapper[8988]: I1203 13:55:42.549556 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 13:55:42.562168 master-0 kubenswrapper[8988]: I1203 13:55:42.559653 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643191 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643274 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643306 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") 
" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643339 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643362 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643398 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643432 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643506 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644057 master-0 kubenswrapper[8988]: I1203 13:55:42.643877 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644693 master-0 kubenswrapper[8988]: I1203 13:55:42.644089 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.644693 master-0 kubenswrapper[8988]: I1203 13:55:42.644141 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9mgz\" (UniqueName: \"kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.647970 master-0 kubenswrapper[8988]: I1203 13:55:42.647899 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7c895b7864-fxr2k"] Dec 03 13:55:42.745756 master-0 kubenswrapper[8988]: I1203 13:55:42.745681 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca\") pod \"apiserver-7c895b7864-fxr2k\" 
(UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.745756 master-0 kubenswrapper[8988]: I1203 13:55:42.745759 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.745756 master-0 kubenswrapper[8988]: I1203 13:55:42.745784 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.745813 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.745843 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.745875 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") pod 
\"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.745905 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.745981 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.746012 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.746067 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.746438 master-0 kubenswrapper[8988]: I1203 13:55:42.746110 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9mgz\" (UniqueName: 
\"kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.747106 master-0 kubenswrapper[8988]: I1203 13:55:42.746791 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.747106 master-0 kubenswrapper[8988]: E1203 13:55:42.746914 8988 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Dec 03 13:55:42.747372 master-0 kubenswrapper[8988]: E1203 13:55:42.747330 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit podName:6498710c-a5d8-4465-b96f-93dc0db63d62 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:43.247205982 +0000 UTC m=+44.435274435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit") pod "apiserver-7c895b7864-fxr2k" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62") : configmap "audit-0" not found Dec 03 13:55:42.747491 master-0 kubenswrapper[8988]: I1203 13:55:42.747462 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.749477 master-0 kubenswrapper[8988]: I1203 13:55:42.748326 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.749477 master-0 kubenswrapper[8988]: I1203 13:55:42.749011 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.749477 master-0 kubenswrapper[8988]: E1203 13:55:42.749448 8988 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Dec 03 13:55:42.750718 master-0 kubenswrapper[8988]: E1203 13:55:42.749534 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert podName:6498710c-a5d8-4465-b96f-93dc0db63d62 nodeName:}" failed. 
No retries permitted until 2025-12-03 13:55:43.24951888 +0000 UTC m=+44.437587163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert") pod "apiserver-7c895b7864-fxr2k" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62") : secret "serving-cert" not found Dec 03 13:55:42.767347 master-0 kubenswrapper[8988]: I1203 13:55:42.764419 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.768108 master-0 kubenswrapper[8988]: I1203 13:55:42.767994 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.782237 master-0 kubenswrapper[8988]: I1203 13:55:42.782008 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.789660 master-0 kubenswrapper[8988]: I1203 13:55:42.789602 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.795791 master-0 kubenswrapper[8988]: 
I1203 13:55:42.795210 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9mgz\" (UniqueName: \"kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:42.816420 master-0 kubenswrapper[8988]: I1203 13:55:42.815606 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"] Dec 03 13:55:42.817042 master-0 kubenswrapper[8988]: I1203 13:55:42.817002 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" Dec 03 13:55:42.819941 master-0 kubenswrapper[8988]: I1203 13:55:42.819846 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 13:55:42.823212 master-0 kubenswrapper[8988]: I1203 13:55:42.821186 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 13:55:42.823212 master-0 kubenswrapper[8988]: I1203 13:55:42.821196 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 13:55:42.823212 master-0 kubenswrapper[8988]: I1203 13:55:42.821772 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 13:55:42.823212 master-0 kubenswrapper[8988]: I1203 13:55:42.821991 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 13:55:42.830102 master-0 kubenswrapper[8988]: I1203 13:55:42.829733 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 13:55:42.917126 master-0 kubenswrapper[8988]: I1203 
13:55:42.914590 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"]
Dec 03 13:55:42.924863 master-0 kubenswrapper[8988]: I1203 13:55:42.918701 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"]
Dec 03 13:55:42.924863 master-0 kubenswrapper[8988]: I1203 13:55:42.924766 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56fb5cd58b-5hnj2"]
Dec 03 13:55:42.948301 master-0 kubenswrapper[8988]: I1203 13:55:42.948210 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5wmw\" (UniqueName: \"kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:42.948301 master-0 kubenswrapper[8988]: I1203 13:55:42.948305 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:42.948635 master-0 kubenswrapper[8988]: I1203 13:55:42.948376 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:42.948635 master-0 kubenswrapper[8988]: I1203 13:55:42.948402 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:42.948635 master-0 kubenswrapper[8988]: I1203 13:55:42.948439 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.039431 master-0 kubenswrapper[8988]: I1203 13:55:43.038765 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c1c761-4c15-46bd-8b20-10214376974b" path="/var/lib/kubelet/pods/70c1c761-4c15-46bd-8b20-10214376974b/volumes"
Dec 03 13:55:43.049563 master-0 kubenswrapper[8988]: I1203 13:55:43.049465 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5wmw\" (UniqueName: \"kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.049563 master-0 kubenswrapper[8988]: I1203 13:55:43.049537 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.049975 master-0 kubenswrapper[8988]: I1203 13:55:43.049597 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.049975 master-0 kubenswrapper[8988]: I1203 13:55:43.049618 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.049975 master-0 kubenswrapper[8988]: I1203 13:55:43.049647 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.049975 master-0 kubenswrapper[8988]: I1203 13:55:43.049693 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70c1c761-4c15-46bd-8b20-10214376974b-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:43.050144 master-0 kubenswrapper[8988]: E1203 13:55:43.050066 8988 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Dec 03 13:55:43.050190 master-0 kubenswrapper[8988]: E1203 13:55:43.050148 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert podName:806ba31c-9e91-469f-9b47-556d22efb642 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:43.550124305 +0000 UTC m=+44.738192588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert") pod "controller-manager-6c79f444f7-8rmss" (UID: "806ba31c-9e91-469f-9b47-556d22efb642") : secret "serving-cert" not found
Dec 03 13:55:43.051224 master-0 kubenswrapper[8988]: I1203 13:55:43.050681 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.055308 master-0 kubenswrapper[8988]: I1203 13:55:43.051484 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.061140 master-0 kubenswrapper[8988]: I1203 13:55:43.061050 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.106716 master-0 kubenswrapper[8988]: I1203 13:55:43.106571 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5wmw\" (UniqueName: \"kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.243410 master-0 kubenswrapper[8988]: W1203 13:55:43.243322 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc78129_4a81_410e_9a42_b12043b5a75a.slice/crio-f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d WatchSource:0}: Error finding container f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d: Status 404 returned error can't find the container with id f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d
Dec 03 13:55:43.258187 master-0 kubenswrapper[8988]: I1203 13:55:43.258092 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k"
Dec 03 13:55:43.258398 master-0 kubenswrapper[8988]: I1203 13:55:43.258197 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k"
Dec 03 13:55:43.258479 master-0 kubenswrapper[8988]: E1203 13:55:43.258436 8988 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Dec 03 13:55:43.258537 master-0 kubenswrapper[8988]: E1203 13:55:43.258510 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit podName:6498710c-a5d8-4465-b96f-93dc0db63d62 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:44.258489916 +0000 UTC m=+45.446558199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit") pod "apiserver-7c895b7864-fxr2k" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62") : configmap "audit-0" not found
Dec 03 13:55:43.263341 master-0 kubenswrapper[8988]: I1203 13:55:43.263193 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k"
Dec 03 13:55:43.271231 master-0 kubenswrapper[8988]: W1203 13:55:43.269973 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod51ff18d8_5b58_4f9b_b20d_13c38531dfc9.slice/crio-3bf2735349828f87978eae4fdda4fa09b8cb53cd6a5f8697617c07d85ece0287 WatchSource:0}: Error finding container 3bf2735349828f87978eae4fdda4fa09b8cb53cd6a5f8697617c07d85ece0287: Status 404 returned error can't find the container with id 3bf2735349828f87978eae4fdda4fa09b8cb53cd6a5f8697617c07d85ece0287
Dec 03 13:55:43.585284 master-0 kubenswrapper[8988]: I1203 13:55:43.585142 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.589783 master-0 kubenswrapper[8988]: I1203 13:55:43.589627 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") pod \"controller-manager-6c79f444f7-8rmss\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") " pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.731215 master-0 kubenswrapper[8988]: I1203 13:55:43.731118 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"44ddc337512cf47184ee9f63cffcf4b3f72f69c2c567abe7ddd38b25975bdf7c"}
Dec 03 13:55:43.732210 master-0 kubenswrapper[8988]: I1203 13:55:43.732177 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"b23778ca4e9ae4dbb3de59134916161ec83a634b903bdd6f9ff3c7980d2471f9"}
Dec 03 13:55:43.733050 master-0 kubenswrapper[8988]: I1203 13:55:43.733020 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"51ff18d8-5b58-4f9b-b20d-13c38531dfc9","Type":"ContainerStarted","Data":"3bf2735349828f87978eae4fdda4fa09b8cb53cd6a5f8697617c07d85ece0287"}
Dec 03 13:55:43.734076 master-0 kubenswrapper[8988]: I1203 13:55:43.734042 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d"}
Dec 03 13:55:43.734915 master-0 kubenswrapper[8988]: I1203 13:55:43.734881 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6","Type":"ContainerStarted","Data":"6721d25c3e1746543af1f9a5ba41f4231e36342be8a55d2319256cbd81592116"}
Dec 03 13:55:43.735965 master-0 kubenswrapper[8988]: I1203 13:55:43.735934 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"939b2c92bcdf7cd1a7639905546ba592e8fa9fac9978494aea2a13c1b29704e8"}
Dec 03 13:55:43.750730 master-0 kubenswrapper[8988]: I1203 13:55:43.750636 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:43.984768 master-0 kubenswrapper[8988]: I1203 13:55:43.983478 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"]
Dec 03 13:55:44.025188 master-0 kubenswrapper[8988]: I1203 13:55:44.025104 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"]
Dec 03 13:55:44.026570 master-0 kubenswrapper[8988]: E1203 13:55:44.026242 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq" podUID="c5dfe987-cfcc-4b70-bbd8-b267571f2618"
Dec 03 13:55:44.296195 master-0 kubenswrapper[8988]: I1203 13:55:44.295487 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k"
Dec 03 13:55:44.296195 master-0 kubenswrapper[8988]: E1203 13:55:44.295808 8988 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Dec 03 13:55:44.296195 master-0 kubenswrapper[8988]: E1203 13:55:44.295987 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit podName:6498710c-a5d8-4465-b96f-93dc0db63d62 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:46.295951829 +0000 UTC m=+47.484020282 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit") pod "apiserver-7c895b7864-fxr2k" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62") : configmap "audit-0" not found
Dec 03 13:55:44.747372 master-0 kubenswrapper[8988]: I1203 13:55:44.745913 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:44.819300 master-0 kubenswrapper[8988]: I1203 13:55:44.810378 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:44.919486 master-0 kubenswrapper[8988]: I1203 13:55:44.917932 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config\") pod \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") "
Dec 03 13:55:44.919486 master-0 kubenswrapper[8988]: I1203 13:55:44.918038 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca\") pod \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") "
Dec 03 13:55:44.919486 master-0 kubenswrapper[8988]: I1203 13:55:44.918144 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h9lp\" (UniqueName: \"kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp\") pod \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") "
Dec 03 13:55:44.939698 master-0 kubenswrapper[8988]: I1203 13:55:44.928984 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca" (OuterVolumeSpecName: "client-ca") pod "c5dfe987-cfcc-4b70-bbd8-b267571f2618" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:44.940485 master-0 kubenswrapper[8988]: I1203 13:55:44.940382 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config" (OuterVolumeSpecName: "config") pod "c5dfe987-cfcc-4b70-bbd8-b267571f2618" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:44.944047 master-0 kubenswrapper[8988]: I1203 13:55:44.943724 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp" (OuterVolumeSpecName: "kube-api-access-8h9lp") pod "c5dfe987-cfcc-4b70-bbd8-b267571f2618" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618"). InnerVolumeSpecName "kube-api-access-8h9lp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:55:45.020827 master-0 kubenswrapper[8988]: I1203 13:55:45.020734 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h9lp\" (UniqueName: \"kubernetes.io/projected/c5dfe987-cfcc-4b70-bbd8-b267571f2618-kube-api-access-8h9lp\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:45.020827 master-0 kubenswrapper[8988]: I1203 13:55:45.020817 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:45.021344 master-0 kubenswrapper[8988]: I1203 13:55:45.021048 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c5dfe987-cfcc-4b70-bbd8-b267571f2618-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:45.049392 master-0 kubenswrapper[8988]: I1203 13:55:45.049043 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"]
Dec 03 13:55:45.063602 master-0 kubenswrapper[8988]: W1203 13:55:45.063531 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod806ba31c_9e91_469f_9b47_556d22efb642.slice/crio-9818b68d6d1f75a6e900f40a24d8504fd6ca0af9f48684e1b0071adc654be2e7 WatchSource:0}: Error finding container 9818b68d6d1f75a6e900f40a24d8504fd6ca0af9f48684e1b0071adc654be2e7: Status 404 returned error can't find the container with id 9818b68d6d1f75a6e900f40a24d8504fd6ca0af9f48684e1b0071adc654be2e7
Dec 03 13:55:45.753306 master-0 kubenswrapper[8988]: I1203 13:55:45.752807 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"62e170df95a1c0ac2f850d72b2d416e6feeb1fc16efb14dae262ed12df7400ca"}
Dec 03 13:55:45.756614 master-0 kubenswrapper[8988]: I1203 13:55:45.754745 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" event={"ID":"806ba31c-9e91-469f-9b47-556d22efb642","Type":"ContainerStarted","Data":"9818b68d6d1f75a6e900f40a24d8504fd6ca0af9f48684e1b0071adc654be2e7"}
Dec 03 13:55:45.757461 master-0 kubenswrapper[8988]: I1203 13:55:45.757366 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:45.757461 master-0 kubenswrapper[8988]: I1203 13:55:45.757351 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" event={"ID":"ce26e464-9a7c-4b22-a2b4-03706b351455","Type":"ContainerStarted","Data":"12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539"}
Dec 03 13:55:45.849291 master-0 kubenswrapper[8988]: I1203 13:55:45.841424 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Dec 03 13:55:46.340854 master-0 kubenswrapper[8988]: I1203 13:55:46.340780 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") pod \"apiserver-7c895b7864-fxr2k\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " pod="openshift-apiserver/apiserver-7c895b7864-fxr2k"
Dec 03 13:55:46.341166 master-0 kubenswrapper[8988]: E1203 13:55:46.340916 8988 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Dec 03 13:55:46.341166 master-0 kubenswrapper[8988]: E1203 13:55:46.341150 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit podName:6498710c-a5d8-4465-b96f-93dc0db63d62 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:50.34112082 +0000 UTC m=+51.529189103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit") pod "apiserver-7c895b7864-fxr2k" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62") : configmap "audit-0" not found
Dec 03 13:55:46.689162 master-0 kubenswrapper[8988]: I1203 13:55:46.685792 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"]
Dec 03 13:55:46.690769 master-0 kubenswrapper[8988]: I1203 13:55:46.690472 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.690769 master-0 kubenswrapper[8988]: I1203 13:55:46.690565 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podStartSLOduration=3.627067096 podStartE2EDuration="20.690511985s" podCreationTimestamp="2025-12-03 13:55:26 +0000 UTC" firstStartedPulling="2025-12-03 13:55:27.568738313 +0000 UTC m=+28.756806596" lastFinishedPulling="2025-12-03 13:55:44.632183202 +0000 UTC m=+45.820251485" observedRunningTime="2025-12-03 13:55:46.683604824 +0000 UTC m=+47.871673127" watchObservedRunningTime="2025-12-03 13:55:46.690511985 +0000 UTC m=+47.878580268"
Dec 03 13:55:46.698284 master-0 kubenswrapper[8988]: I1203 13:55:46.695484 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Dec 03 13:55:46.698284 master-0 kubenswrapper[8988]: I1203 13:55:46.695847 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Dec 03 13:55:46.698284 master-0 kubenswrapper[8988]: I1203 13:55:46.695888 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Dec 03 13:55:46.700939 master-0 kubenswrapper[8988]: I1203 13:55:46.699615 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Dec 03 13:55:46.745092 master-0 kubenswrapper[8988]: I1203 13:55:46.744645 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") pod \"route-controller-manager-7ffbbcd969-mkclq\" (UID: \"c5dfe987-cfcc-4b70-bbd8-b267571f2618\") " pod="openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"
Dec 03 13:55:46.745092 master-0 kubenswrapper[8988]: E1203 13:55:46.744911 8988 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 13:55:46.745092 master-0 kubenswrapper[8988]: E1203 13:55:46.745047 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert podName:c5dfe987-cfcc-4b70-bbd8-b267571f2618 nodeName:}" failed. No retries permitted until 2025-12-03 13:56:02.745019761 +0000 UTC m=+63.933088234 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert") pod "route-controller-manager-7ffbbcd969-mkclq" (UID: "c5dfe987-cfcc-4b70-bbd8-b267571f2618") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 13:55:46.778858 master-0 kubenswrapper[8988]: I1203 13:55:46.771729 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"]
Dec 03 13:55:46.779863 master-0 kubenswrapper[8988]: I1203 13:55:46.778998 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"51ff18d8-5b58-4f9b-b20d-13c38531dfc9","Type":"ContainerStarted","Data":"b61bebb3f01371606a62b7056588d7fc801a5a8d2bc2c3b9387f7fcc593a8e79"}
Dec 03 13:55:46.779863 master-0 kubenswrapper[8988]: I1203 13:55:46.779226 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" containerName="installer" containerID="cri-o://b61bebb3f01371606a62b7056588d7fc801a5a8d2bc2c3b9387f7fcc593a8e79" gracePeriod=30
Dec 03 13:55:46.800101 master-0 kubenswrapper[8988]: I1203 13:55:46.800012 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"31246afa1a33fa8a661087dcba45abf95e0aa5ec8f80d510b8923243105cc3d6"}
Dec 03 13:55:46.800101 master-0 kubenswrapper[8988]: I1203 13:55:46.800078 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"b8acb9e1f99c70fbeaa30bb6e4bedfc432c780e2300c2633ef0d92e3b3ccf9ed"}
Dec 03 13:55:46.845754 master-0 kubenswrapper[8988]: I1203 13:55:46.845652 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.845754 master-0 kubenswrapper[8988]: I1203 13:55:46.845719 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.846212 master-0 kubenswrapper[8988]: I1203 13:55:46.845820 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.846547 master-0 kubenswrapper[8988]: I1203 13:55:46.846239 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.846614 master-0 kubenswrapper[8988]: I1203 13:55:46.846562 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.846691 master-0 kubenswrapper[8988]: I1203 13:55:46.846668 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.921397 master-0 kubenswrapper[8988]: I1203 13:55:46.920672 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"]
Dec 03 13:55:46.921397 master-0 kubenswrapper[8988]: I1203 13:55:46.921248 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:46.926305 master-0 kubenswrapper[8988]: I1203 13:55:46.923912 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 13:55:46.926305 master-0 kubenswrapper[8988]: I1203 13:55:46.924299 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 13:55:46.928433 master-0 kubenswrapper[8988]: I1203 13:55:46.928211 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:55:46.931291 master-0 kubenswrapper[8988]: I1203 13:55:46.928631 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 13:55:46.931492 master-0 kubenswrapper[8988]: I1203 13:55:46.931408 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 13:55:46.947924 master-0 kubenswrapper[8988]: I1203 13:55:46.947847 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.947924 master-0 kubenswrapper[8988]: I1203 13:55:46.947923 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.947967 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.948003 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.948023 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.948051 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.948141 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.948186 master-0 kubenswrapper[8988]: I1203 13:55:46.948165 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:46.948572 master-0 kubenswrapper[8988]: I1203 13:55:46.948200 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pncc2\" (UniqueName: \"kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:46.948572 master-0 kubenswrapper[8988]: I1203 13:55:46.948275 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.949610 master-0 kubenswrapper[8988]: I1203 13:55:46.949539 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.949780 master-0 kubenswrapper[8988]: E1203 13:55:46.949687 8988 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found
Dec 03 13:55:46.949916 master-0 kubenswrapper[8988]: E1203 13:55:46.949866 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:47.449825189 +0000 UTC m=+48.637893672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : secret "catalogserver-cert" not found
Dec 03 13:55:46.950117 master-0 kubenswrapper[8988]: I1203 13:55:46.949979 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.950117 master-0 kubenswrapper[8988]: I1203 13:55:46.949990 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:46.958270 master-0 kubenswrapper[8988]: I1203 13:55:46.958179 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:55:47.020058 master-0 kubenswrapper[8988]: I1203 13:55:47.016512 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"]
Dec 03 13:55:47.023098 master-0 kubenswrapper[8988]: I1203 13:55:47.023023 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"]
Dec 03 13:55:47.049295 master-0 kubenswrapper[8988]: I1203 13:55:47.049082 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:47.049295 master-0 kubenswrapper[8988]: I1203 13:55:47.049185 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"
Dec 03 13:55:47.049295 master-0 kubenswrapper[8988]: I1203 13:55:47.049250 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") "
pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.049295 master-0 kubenswrapper[8988]: I1203 13:55:47.049311 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pncc2\" (UniqueName: \"kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.051918 master-0 kubenswrapper[8988]: I1203 13:55:47.051246 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.058363 master-0 kubenswrapper[8988]: I1203 13:55:47.053049 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.058363 master-0 kubenswrapper[8988]: I1203 13:55:47.056055 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.078272 master-0 kubenswrapper[8988]: I1203 13:55:47.078182 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7ffbbcd969-mkclq"] Dec 03 13:55:47.099894 master-0 kubenswrapper[8988]: I1203 13:55:47.098115 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:47.141330 master-0 kubenswrapper[8988]: I1203 13:55:47.140503 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7c895b7864-fxr2k"] Dec 03 13:55:47.141330 master-0 kubenswrapper[8988]: E1203 13:55:47.140920 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" podUID="6498710c-a5d8-4465-b96f-93dc0db63d62" Dec 03 13:55:47.153350 master-0 kubenswrapper[8988]: I1203 13:55:47.151235 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5dfe987-cfcc-4b70-bbd8-b267571f2618-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.157591 master-0 kubenswrapper[8988]: I1203 13:55:47.155956 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pncc2\" (UniqueName: \"kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2\") pod \"route-controller-manager-66bd7f46c9-p8fcq\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.530875 master-0 kubenswrapper[8988]: I1203 13:55:47.530131 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:55:47.534361 master-0 kubenswrapper[8988]: I1203 13:55:47.532841 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:47.534361 master-0 kubenswrapper[8988]: E1203 13:55:47.533204 8988 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Dec 03 13:55:47.534361 master-0 kubenswrapper[8988]: E1203 13:55:47.533393 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:48.533372644 +0000 UTC m=+49.721440927 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : secret "catalogserver-cert" not found Dec 03 13:55:47.672289 master-0 kubenswrapper[8988]: I1203 13:55:47.671701 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=17.671660869 podStartE2EDuration="17.671660869s" podCreationTimestamp="2025-12-03 13:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:47.648980159 +0000 UTC m=+48.837048452" watchObservedRunningTime="2025-12-03 13:55:47.671660869 +0000 UTC m=+48.859729152" Dec 03 13:55:47.745607 master-0 kubenswrapper[8988]: I1203 13:55:47.745401 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podStartSLOduration=6.925875968 podStartE2EDuration="18.745365503s" podCreationTimestamp="2025-12-03 13:55:29 +0000 UTC" firstStartedPulling="2025-12-03 13:55:33.132040157 +0000 UTC m=+34.320108440" lastFinishedPulling="2025-12-03 13:55:44.951529702 +0000 UTC m=+46.139597975" observedRunningTime="2025-12-03 13:55:47.74043305 +0000 UTC m=+48.928501353" watchObservedRunningTime="2025-12-03 13:55:47.745365503 +0000 UTC m=+48.933433786" Dec 03 13:55:47.811310 master-0 kubenswrapper[8988]: I1203 13:55:47.809574 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_51ff18d8-5b58-4f9b-b20d-13c38531dfc9/installer/0.log" Dec 03 13:55:47.811310 master-0 kubenswrapper[8988]: I1203 13:55:47.809650 8988 generic.go:334] "Generic (PLEG): container finished" podID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" 
containerID="b61bebb3f01371606a62b7056588d7fc801a5a8d2bc2c3b9387f7fcc593a8e79" exitCode=1 Dec 03 13:55:47.811310 master-0 kubenswrapper[8988]: I1203 13:55:47.809777 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"51ff18d8-5b58-4f9b-b20d-13c38531dfc9","Type":"ContainerDied","Data":"b61bebb3f01371606a62b7056588d7fc801a5a8d2bc2c3b9387f7fcc593a8e79"} Dec 03 13:55:47.815341 master-0 kubenswrapper[8988]: I1203 13:55:47.814245 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6","Type":"ContainerStarted","Data":"8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e"} Dec 03 13:55:47.815341 master-0 kubenswrapper[8988]: I1203 13:55:47.815006 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:47.838953 master-0 kubenswrapper[8988]: I1203 13:55:47.838899 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:47.841783 master-0 kubenswrapper[8988]: I1203 13:55:47.841723 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=17.841711186 podStartE2EDuration="17.841711186s" podCreationTimestamp="2025-12-03 13:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:47.840964685 +0000 UTC m=+49.029032988" watchObservedRunningTime="2025-12-03 13:55:47.841711186 +0000 UTC m=+49.029779479" Dec 03 13:55:47.876820 master-0 kubenswrapper[8988]: I1203 13:55:47.876746 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"] Dec 03 13:55:47.941035 master-0 kubenswrapper[8988]: I1203 13:55:47.940963 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9mgz\" (UniqueName: \"kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 kubenswrapper[8988]: I1203 13:55:47.941064 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 kubenswrapper[8988]: I1203 13:55:47.941113 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 
kubenswrapper[8988]: I1203 13:55:47.941150 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 kubenswrapper[8988]: I1203 13:55:47.941204 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 kubenswrapper[8988]: I1203 13:55:47.941246 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941415 master-0 kubenswrapper[8988]: I1203 13:55:47.941355 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941630 master-0 kubenswrapper[8988]: I1203 13:55:47.941398 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941630 master-0 kubenswrapper[8988]: I1203 13:55:47.941499 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941630 master-0 kubenswrapper[8988]: I1203 13:55:47.941526 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client\") pod \"6498710c-a5d8-4465-b96f-93dc0db63d62\" (UID: \"6498710c-a5d8-4465-b96f-93dc0db63d62\") " Dec 03 13:55:47.941785 master-0 kubenswrapper[8988]: I1203 13:55:47.941717 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:47.942119 master-0 kubenswrapper[8988]: I1203 13:55:47.941839 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:47.942311 master-0 kubenswrapper[8988]: I1203 13:55:47.942232 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:55:47.942527 master-0 kubenswrapper[8988]: I1203 13:55:47.942496 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:55:47.943035 master-0 kubenswrapper[8988]: I1203 13:55:47.942993 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.943690 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config" (OuterVolumeSpecName: "config") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.943950 8988 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-audit-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.943983 8988 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.943997 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.944014 8988 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.944028 8988 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6498710c-a5d8-4465-b96f-93dc0db63d62-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.944105 master-0 kubenswrapper[8988]: I1203 13:55:47.944042 8988 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-image-import-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:47.982353 master-0 kubenswrapper[8988]: I1203 13:55:47.951594 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config" 
(OuterVolumeSpecName: "encryption-config") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:55:47.982353 master-0 kubenswrapper[8988]: I1203 13:55:47.951928 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz" (OuterVolumeSpecName: "kube-api-access-q9mgz") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "kube-api-access-q9mgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:55:47.982353 master-0 kubenswrapper[8988]: I1203 13:55:47.953369 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:55:47.982353 master-0 kubenswrapper[8988]: I1203 13:55:47.981719 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "6498710c-a5d8-4465-b96f-93dc0db63d62" (UID: "6498710c-a5d8-4465-b96f-93dc0db63d62"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:55:48.045460 master-0 kubenswrapper[8988]: I1203 13:55:48.045275 8988 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-encryption-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:48.045460 master-0 kubenswrapper[8988]: I1203 13:55:48.045316 8988 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-etcd-client\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:48.045460 master-0 kubenswrapper[8988]: I1203 13:55:48.045355 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9mgz\" (UniqueName: \"kubernetes.io/projected/6498710c-a5d8-4465-b96f-93dc0db63d62-kube-api-access-q9mgz\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:48.045460 master-0 kubenswrapper[8988]: I1203 13:55:48.045372 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6498710c-a5d8-4465-b96f-93dc0db63d62-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:48.295132 master-0 kubenswrapper[8988]: I1203 13:55:48.293212 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"] Dec 03 13:55:48.295132 master-0 kubenswrapper[8988]: I1203 13:55:48.294912 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.299711 master-0 kubenswrapper[8988]: I1203 13:55:48.299647 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Dec 03 13:55:48.299829 master-0 kubenswrapper[8988]: I1203 13:55:48.299727 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Dec 03 13:55:48.299885 master-0 kubenswrapper[8988]: I1203 13:55:48.299744 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Dec 03 13:55:48.321869 master-0 kubenswrapper[8988]: I1203 13:55:48.321794 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"] Dec 03 13:55:48.353152 master-0 kubenswrapper[8988]: I1203 13:55:48.353083 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.353481 master-0 kubenswrapper[8988]: I1203 13:55:48.353231 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.353481 master-0 
kubenswrapper[8988]: I1203 13:55:48.353396 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.353481 master-0 kubenswrapper[8988]: I1203 13:55:48.353465 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.353622 master-0 kubenswrapper[8988]: I1203 13:55:48.353544 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.455372 master-0 kubenswrapper[8988]: I1203 13:55:48.455289 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.455643 master-0 kubenswrapper[8988]: I1203 13:55:48.455382 8988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.455643 master-0 kubenswrapper[8988]: I1203 13:55:48.455445 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.455643 master-0 kubenswrapper[8988]: I1203 13:55:48.455490 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.455643 master-0 kubenswrapper[8988]: I1203 13:55:48.455555 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.456331 master-0 kubenswrapper[8988]: I1203 13:55:48.456307 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.456431 master-0 kubenswrapper[8988]: I1203 13:55:48.456405 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.458399 master-0 kubenswrapper[8988]: E1203 13:55:48.456607 8988 projected.go:301] Couldn't get configMap payload openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap references non-existent config key: ca-bundle.crt Dec 03 13:55:48.458399 master-0 kubenswrapper[8988]: E1203 13:55:48.456701 8988 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: configmap references non-existent config key: ca-bundle.crt Dec 03 13:55:48.458399 master-0 kubenswrapper[8988]: E1203 13:55:48.456819 8988 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 13:55:48.956788161 +0000 UTC m=+50.144856674 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : configmap references non-existent config key: ca-bundle.crt Dec 03 13:55:48.458399 master-0 kubenswrapper[8988]: I1203 13:55:48.457000 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.560927 master-0 kubenswrapper[8988]: I1203 13:55:48.557648 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:48.571996 master-0 kubenswrapper[8988]: I1203 13:55:48.571887 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Dec 03 13:55:48.572668 master-0 kubenswrapper[8988]: I1203 13:55:48.572621 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.576110 master-0 kubenswrapper[8988]: I1203 13:55:48.576050 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:48.588392 master-0 kubenswrapper[8988]: I1203 13:55:48.586520 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Dec 03 13:55:48.588659 master-0 kubenswrapper[8988]: I1203 13:55:48.588607 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.659473 master-0 kubenswrapper[8988]: I1203 13:55:48.659376 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.659473 master-0 kubenswrapper[8988]: I1203 13:55:48.659485 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.659839 master-0 kubenswrapper[8988]: I1203 13:55:48.659517 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.760771 master-0 kubenswrapper[8988]: I1203 13:55:48.760686 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.761104 master-0 kubenswrapper[8988]: I1203 13:55:48.760801 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.761104 master-0 kubenswrapper[8988]: I1203 13:55:48.760830 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.761104 master-0 kubenswrapper[8988]: I1203 13:55:48.760935 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.761104 master-0 kubenswrapper[8988]: I1203 13:55:48.760941 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.794477 master-0 kubenswrapper[8988]: I1203 13:55:48.794412 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.820475 master-0 kubenswrapper[8988]: I1203 13:55:48.819451 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7c895b7864-fxr2k" Dec 03 13:55:48.827938 master-0 kubenswrapper[8988]: I1203 13:55:48.827878 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:48.884247 master-0 kubenswrapper[8988]: I1203 13:55:48.884152 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-6985f84b49-v9vlg"] Dec 03 13:55:48.885196 master-0 kubenswrapper[8988]: I1203 13:55:48.885149 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.890914 master-0 kubenswrapper[8988]: I1203 13:55:48.888747 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 13:55:48.891214 master-0 kubenswrapper[8988]: I1203 13:55:48.891111 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 13:55:48.891297 master-0 kubenswrapper[8988]: I1203 13:55:48.888837 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 13:55:48.891364 master-0 kubenswrapper[8988]: I1203 13:55:48.888899 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 13:55:48.891404 master-0 kubenswrapper[8988]: I1203 13:55:48.888989 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 13:55:48.891439 master-0 kubenswrapper[8988]: I1203 13:55:48.889101 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 13:55:48.891527 master-0 kubenswrapper[8988]: I1203 13:55:48.889913 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 13:55:48.891527 master-0 kubenswrapper[8988]: I1203 13:55:48.890021 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 13:55:48.891840 master-0 kubenswrapper[8988]: I1203 13:55:48.891750 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 13:55:48.902110 master-0 kubenswrapper[8988]: I1203 13:55:48.899484 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 13:55:48.902110 master-0 kubenswrapper[8988]: I1203 13:55:48.901972 8988 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-7c895b7864-fxr2k"] Dec 03 13:55:48.925997 master-0 kubenswrapper[8988]: I1203 13:55:48.925921 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:48.963467 master-0 kubenswrapper[8988]: I1203 13:55:48.963368 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.963860 master-0 kubenswrapper[8988]: I1203 13:55:48.963764 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.963926 master-0 kubenswrapper[8988]: I1203 13:55:48.963860 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.963926 master-0 kubenswrapper[8988]: I1203 13:55:48.963918 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 
13:55:48.964023 master-0 kubenswrapper[8988]: I1203 13:55:48.963976 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:48.964023 master-0 kubenswrapper[8988]: I1203 13:55:48.963994 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964023 master-0 kubenswrapper[8988]: I1203 13:55:48.964014 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964729 master-0 kubenswrapper[8988]: I1203 13:55:48.964117 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964729 master-0 kubenswrapper[8988]: I1203 13:55:48.964135 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964729 master-0 kubenswrapper[8988]: I1203 13:55:48.964156 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964729 master-0 kubenswrapper[8988]: I1203 13:55:48.964174 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.964729 master-0 kubenswrapper[8988]: I1203 13:55:48.964194 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:48.968898 master-0 kubenswrapper[8988]: I1203 13:55:48.968829 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:49.071679 master-0 
kubenswrapper[8988]: I1203 13:55:49.064755 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.064818 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.064849 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.064873 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.064897 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.064936 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.065019 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.065056 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.065100 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.065193 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.065242 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.067024 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.071679 master-0 kubenswrapper[8988]: I1203 13:55:49.071112 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.072555 master-0 kubenswrapper[8988]: I1203 13:55:49.072058 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.072555 master-0 kubenswrapper[8988]: I1203 13:55:49.072149 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5dfe987-cfcc-4b70-bbd8-b267571f2618" path="/var/lib/kubelet/pods/c5dfe987-cfcc-4b70-bbd8-b267571f2618/volumes" Dec 03 13:55:49.072555 master-0 kubenswrapper[8988]: I1203 13:55:49.072546 8988 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-6985f84b49-v9vlg"] Dec 03 13:55:49.072819 master-0 kubenswrapper[8988]: I1203 13:55:49.072639 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.073719 master-0 kubenswrapper[8988]: I1203 13:55:49.073641 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.075112 master-0 kubenswrapper[8988]: I1203 13:55:49.074528 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.075112 master-0 kubenswrapper[8988]: I1203 13:55:49.074708 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.075392 master-0 kubenswrapper[8988]: I1203 13:55:49.075191 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: 
\"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.075997 master-0 kubenswrapper[8988]: I1203 13:55:49.075901 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-7c895b7864-fxr2k"] Dec 03 13:55:49.078287 master-0 kubenswrapper[8988]: I1203 13:55:49.078039 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.079075 master-0 kubenswrapper[8988]: I1203 13:55:49.079007 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.105769 master-0 kubenswrapper[8988]: I1203 13:55:49.105677 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:49.166319 master-0 kubenswrapper[8988]: I1203 13:55:49.166195 8988 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6498710c-a5d8-4465-b96f-93dc0db63d62-audit\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:49.235676 master-0 kubenswrapper[8988]: I1203 13:55:49.235599 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:49.377123 master-0 kubenswrapper[8988]: I1203 13:55:49.376946 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:55:50.156876 master-0 kubenswrapper[8988]: I1203 13:55:50.156444 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:55:50.940415 master-0 kubenswrapper[8988]: I1203 13:55:50.940343 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_51ff18d8-5b58-4f9b-b20d-13c38531dfc9/installer/0.log" Dec 03 13:55:50.940676 master-0 kubenswrapper[8988]: I1203 13:55:50.940457 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:50.997769 master-0 kubenswrapper[8988]: I1203 13:55:50.997688 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir\") pod \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " Dec 03 13:55:50.998045 master-0 kubenswrapper[8988]: I1203 13:55:50.997795 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access\") pod \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " Dec 03 13:55:50.998045 master-0 kubenswrapper[8988]: I1203 13:55:50.997835 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock\") pod 
\"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\" (UID: \"51ff18d8-5b58-4f9b-b20d-13c38531dfc9\") " Dec 03 13:55:50.998045 master-0 kubenswrapper[8988]: I1203 13:55:50.997895 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "51ff18d8-5b58-4f9b-b20d-13c38531dfc9" (UID: "51ff18d8-5b58-4f9b-b20d-13c38531dfc9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:50.998045 master-0 kubenswrapper[8988]: I1203 13:55:50.998041 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock" (OuterVolumeSpecName: "var-lock") pod "51ff18d8-5b58-4f9b-b20d-13c38531dfc9" (UID: "51ff18d8-5b58-4f9b-b20d-13c38531dfc9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:50.998228 master-0 kubenswrapper[8988]: I1203 13:55:50.998151 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:50.998228 master-0 kubenswrapper[8988]: I1203 13:55:50.998175 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:51.009970 master-0 kubenswrapper[8988]: I1203 13:55:51.009536 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "51ff18d8-5b58-4f9b-b20d-13c38531dfc9" (UID: "51ff18d8-5b58-4f9b-b20d-13c38531dfc9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:55:51.034459 master-0 kubenswrapper[8988]: I1203 13:55:51.034349 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6498710c-a5d8-4465-b96f-93dc0db63d62" path="/var/lib/kubelet/pods/6498710c-a5d8-4465-b96f-93dc0db63d62/volumes" Dec 03 13:55:51.099014 master-0 kubenswrapper[8988]: I1203 13:55:51.098931 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/51ff18d8-5b58-4f9b-b20d-13c38531dfc9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 13:55:51.858374 master-0 kubenswrapper[8988]: I1203 13:55:51.858232 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_51ff18d8-5b58-4f9b-b20d-13c38531dfc9/installer/0.log" Dec 03 13:55:51.858374 master-0 kubenswrapper[8988]: I1203 13:55:51.858360 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"51ff18d8-5b58-4f9b-b20d-13c38531dfc9","Type":"ContainerDied","Data":"3bf2735349828f87978eae4fdda4fa09b8cb53cd6a5f8697617c07d85ece0287"} Dec 03 13:55:51.859518 master-0 kubenswrapper[8988]: I1203 13:55:51.858475 8988 scope.go:117] "RemoveContainer" containerID="b61bebb3f01371606a62b7056588d7fc801a5a8d2bc2c3b9387f7fcc593a8e79" Dec 03 13:55:51.859518 master-0 kubenswrapper[8988]: I1203 13:55:51.858496 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Dec 03 13:55:54.902316 master-0 kubenswrapper[8988]: I1203 13:55:54.901552 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Dec 03 13:55:54.937295 master-0 kubenswrapper[8988]: I1203 13:55:54.935157 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Dec 03 13:55:55.021305 master-0 kubenswrapper[8988]: I1203 13:55:55.019549 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Dec 03 13:55:55.034305 master-0 kubenswrapper[8988]: I1203 13:55:55.028428 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" path="/var/lib/kubelet/pods/51ff18d8-5b58-4f9b-b20d-13c38531dfc9/volumes" Dec 03 13:55:56.577618 master-0 kubenswrapper[8988]: I1203 13:55:56.576091 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Dec 03 13:55:56.605157 master-0 kubenswrapper[8988]: I1203 13:55:56.605101 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"] Dec 03 13:55:56.618213 master-0 kubenswrapper[8988]: W1203 13:55:56.618131 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69b752ed_691c_4574_a01e_428d4bf85b75.slice/crio-4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889 WatchSource:0}: Error finding container 4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889: Status 404 returned error can't find the container with id 4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889 Dec 03 13:55:56.672646 master-0 kubenswrapper[8988]: I1203 13:55:56.671865 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-6985f84b49-v9vlg"]
Dec 03 13:55:56.895306 master-0 kubenswrapper[8988]: I1203 13:55:56.877837 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"]
Dec 03 13:55:56.896792 master-0 kubenswrapper[8988]: I1203 13:55:56.896736 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"]
Dec 03 13:55:56.924329 master-0 kubenswrapper[8988]: I1203 13:55:56.924214 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889"}
Dec 03 13:55:56.929113 master-0 kubenswrapper[8988]: I1203 13:55:56.928679 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"92ac936b00521b1c9793237e1a0895eec6abe954c1fd85fa13ba44ec6da4fd3b"}
Dec 03 13:55:56.929113 master-0 kubenswrapper[8988]: I1203 13:55:56.928710 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91"}
Dec 03 13:55:56.939142 master-0 kubenswrapper[8988]: I1203 13:55:56.930238 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"24530698-b0b7-4075-893e-ea206720762e","Type":"ContainerStarted","Data":"baa76b72f2d92c3be922339b21b3ba40427be4021e14de7636305106e7fba745"}
Dec 03 13:55:56.939142 master-0 kubenswrapper[8988]: I1203 13:55:56.932803 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"022c984796ffbc81ed2de2d84261fe1bf89204572d6040b93e26ccb33c39afb7"}
Dec 03 13:55:56.939142 master-0 kubenswrapper[8988]: W1203 13:55:56.932848 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda528f2a3_5033_449c_b8d1_2317ecd02849.slice/crio-189a25dbd3d8ef610cd0fe885d8dd509b064860f60af936e5ccb32318db4324a WatchSource:0}: Error finding container 189a25dbd3d8ef610cd0fe885d8dd509b064860f60af936e5ccb32318db4324a: Status 404 returned error can't find the container with id 189a25dbd3d8ef610cd0fe885d8dd509b064860f60af936e5ccb32318db4324a
Dec 03 13:55:56.942162 master-0 kubenswrapper[8988]: I1203 13:55:56.942123 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" event={"ID":"806ba31c-9e91-469f-9b47-556d22efb642","Type":"ContainerStarted","Data":"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0"}
Dec 03 13:55:56.942767 master-0 kubenswrapper[8988]: I1203 13:55:56.942446 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" podUID="806ba31c-9e91-469f-9b47-556d22efb642" containerName="controller-manager" containerID="cri-o://56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0" gracePeriod=30
Dec 03 13:55:56.942767 master-0 kubenswrapper[8988]: I1203 13:55:56.943132 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:56.957529 master-0 kubenswrapper[8988]: I1203 13:55:56.957399 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:56.967232 master-0 kubenswrapper[8988]: I1203 13:55:56.967170 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"e3ebbbd0ee7ca51929de3beceebd50f7b813cf02ddcf6e89d22ba8b987cb3d6e"}
Dec 03 13:55:56.975169 master-0 kubenswrapper[8988]: I1203 13:55:56.974355 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"f1e0d0c89fdbd56315b6d6adc3d91b7eb65288e3d043747b8f40d02fb2a90da3"}
Dec 03 13:55:56.983676 master-0 kubenswrapper[8988]: I1203 13:55:56.979051 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"28c67c1281b6fbfb0446ab66907176e4746bcaec0c3dd47723e1e5adbfcd3d35"}
Dec 03 13:55:57.080193 master-0 kubenswrapper[8988]: I1203 13:55:57.079934 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" podStartSLOduration=16.101658953 podStartE2EDuration="27.079894955s" podCreationTimestamp="2025-12-03 13:55:30 +0000 UTC" firstStartedPulling="2025-12-03 13:55:45.340956272 +0000 UTC m=+46.529024555" lastFinishedPulling="2025-12-03 13:55:56.319192274 +0000 UTC m=+57.507260557" observedRunningTime="2025-12-03 13:55:57.041609261 +0000 UTC m=+58.229677554" watchObservedRunningTime="2025-12-03 13:55:57.079894955 +0000 UTC m=+58.267963238"
Dec 03 13:55:57.130832 master-0 kubenswrapper[8988]: I1203 13:55:57.126613 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-7zkbg"]
Dec 03 13:55:57.130832 master-0 kubenswrapper[8988]: E1203 13:55:57.128235 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" containerName="installer"
Dec 03 13:55:57.130832 master-0 kubenswrapper[8988]: I1203 13:55:57.128337 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" containerName="installer"
Dec 03 13:55:57.130832 master-0 kubenswrapper[8988]: I1203 13:55:57.128498 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ff18d8-5b58-4f9b-b20d-13c38531dfc9" containerName="installer"
Dec 03 13:55:57.130832 master-0 kubenswrapper[8988]: I1203 13:55:57.129216 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287007 master-0 kubenswrapper[8988]: I1203 13:55:57.286927 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287007 master-0 kubenswrapper[8988]: I1203 13:55:57.287015 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287042 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287174 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287205 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287226 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287252 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287309 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287344 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287371 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287507 master-0 kubenswrapper[8988]: I1203 13:55:57.287408 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287926 master-0 kubenswrapper[8988]: I1203 13:55:57.287527 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287926 master-0 kubenswrapper[8988]: I1203 13:55:57.287558 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.287926 master-0 kubenswrapper[8988]: I1203 13:55:57.287581 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.388777 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.388840 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.388903 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.388952 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.388991 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389015 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389057 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389098 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389120 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389144 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389165 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389185 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389211 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389242 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389366 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.389395 master-0 kubenswrapper[8988]: I1203 13:55:57.389415 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.390354 master-0 kubenswrapper[8988]: I1203 13:55:57.389854 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.390354 master-0 kubenswrapper[8988]: I1203 13:55:57.389890 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.390354 master-0 kubenswrapper[8988]: I1203 13:55:57.390034 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.391566 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.391680 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.391721 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.391713 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.391848 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.392373 master-0 kubenswrapper[8988]: I1203 13:55:57.392051 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.399551 master-0 kubenswrapper[8988]: I1203 13:55:57.399479 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.399872 master-0 kubenswrapper[8988]: I1203 13:55:57.399808 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.421471 master-0 kubenswrapper[8988]: I1203 13:55:57.421178 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.485902 master-0 kubenswrapper[8988]: I1203 13:55:57.485303 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss"
Dec 03 13:55:57.530364 master-0 kubenswrapper[8988]: I1203 13:55:57.529727 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"]
Dec 03 13:55:57.530364 master-0 kubenswrapper[8988]: E1203 13:55:57.529957 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806ba31c-9e91-469f-9b47-556d22efb642" containerName="controller-manager"
Dec 03 13:55:57.530364 master-0 kubenswrapper[8988]: I1203 13:55:57.529975 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="806ba31c-9e91-469f-9b47-556d22efb642" containerName="controller-manager"
Dec 03 13:55:57.530364 master-0 kubenswrapper[8988]: I1203 13:55:57.530062 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="806ba31c-9e91-469f-9b47-556d22efb642" containerName="controller-manager"
Dec 03 13:55:57.530745 master-0 kubenswrapper[8988]: I1203 13:55:57.530468 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.538333 master-0 kubenswrapper[8988]: I1203 13:55:57.535643 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:55:57.546605 master-0 kubenswrapper[8988]: I1203 13:55:57.542647 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"]
Dec 03 13:55:57.591424 master-0 kubenswrapper[8988]: I1203 13:55:57.591364 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") pod \"806ba31c-9e91-469f-9b47-556d22efb642\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") "
Dec 03 13:55:57.592708 master-0 kubenswrapper[8988]: I1203 13:55:57.591431 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles\") pod \"806ba31c-9e91-469f-9b47-556d22efb642\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") "
Dec 03 13:55:57.592708 master-0 kubenswrapper[8988]: I1203 13:55:57.591470 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5wmw\" (UniqueName: \"kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw\") pod \"806ba31c-9e91-469f-9b47-556d22efb642\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") "
Dec 03 13:55:57.592708 master-0 kubenswrapper[8988]: I1203 13:55:57.591549 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca\") pod \"806ba31c-9e91-469f-9b47-556d22efb642\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") "
Dec 03 13:55:57.592708 master-0 kubenswrapper[8988]: I1203 13:55:57.591608 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config\") pod \"806ba31c-9e91-469f-9b47-556d22efb642\" (UID: \"806ba31c-9e91-469f-9b47-556d22efb642\") "
Dec 03 13:55:57.592708 master-0 kubenswrapper[8988]: I1203 13:55:57.592573 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "806ba31c-9e91-469f-9b47-556d22efb642" (UID: "806ba31c-9e91-469f-9b47-556d22efb642"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:57.597567 master-0 kubenswrapper[8988]: I1203 13:55:57.594167 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca" (OuterVolumeSpecName: "client-ca") pod "806ba31c-9e91-469f-9b47-556d22efb642" (UID: "806ba31c-9e91-469f-9b47-556d22efb642"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:57.597567 master-0 kubenswrapper[8988]: I1203 13:55:57.596069 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config" (OuterVolumeSpecName: "config") pod "806ba31c-9e91-469f-9b47-556d22efb642" (UID: "806ba31c-9e91-469f-9b47-556d22efb642"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:55:57.605226 master-0 kubenswrapper[8988]: I1203 13:55:57.605165 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5m4f8"]
Dec 03 13:55:57.608461 master-0 kubenswrapper[8988]: I1203 13:55:57.606075 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:55:57.616392 master-0 kubenswrapper[8988]: I1203 13:55:57.609198 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "806ba31c-9e91-469f-9b47-556d22efb642" (UID: "806ba31c-9e91-469f-9b47-556d22efb642"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:55:57.616392 master-0 kubenswrapper[8988]: I1203 13:55:57.609902 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 03 13:55:57.616392 master-0 kubenswrapper[8988]: I1203 13:55:57.609902 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 13:55:57.616392 master-0 kubenswrapper[8988]: I1203 13:55:57.609993 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 13:55:57.616392 master-0 kubenswrapper[8988]: I1203 13:55:57.609988 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 03 13:55:57.625428 master-0 kubenswrapper[8988]: I1203 13:55:57.622179 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5m4f8"]
Dec 03 13:55:57.625428 master-0 kubenswrapper[8988]: I1203 13:55:57.625373 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw" (OuterVolumeSpecName: "kube-api-access-c5wmw") pod "806ba31c-9e91-469f-9b47-556d22efb642" (UID: "806ba31c-9e91-469f-9b47-556d22efb642"). InnerVolumeSpecName "kube-api-access-c5wmw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693210 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693293 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693315 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693382 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693451 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvqr7\" (UniqueName: \"kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693485 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693497 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/806ba31c-9e91-469f-9b47-556d22efb642-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693507 8988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693517 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5wmw\" (UniqueName: \"kubernetes.io/projected/806ba31c-9e91-469f-9b47-556d22efb642-kube-api-access-c5wmw\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:57.696301 master-0 kubenswrapper[8988]: I1203 13:55:57.693526 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/806ba31c-9e91-469f-9b47-556d22efb642-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794190 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvqr7\" (UniqueName: \"kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794251 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794320 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794351 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794373 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794424 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794447 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:55:57.794512 master-0 kubenswrapper[8988]: I1203 13:55:57.794468 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:55:57.797229 master-0 kubenswrapper[8988]: I1203 13:55:57.797169 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:57.798115 master-0 kubenswrapper[8988]: I1203 13:55:57.798062 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:55:57.798743 master-0 kubenswrapper[8988]: I1203 13:55:57.798684 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:55:57.805401 master-0 kubenswrapper[8988]: I1203 13:55:57.803425 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:55:57.823276 master-0 kubenswrapper[8988]: I1203 13:55:57.823154 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvqr7\" (UniqueName: \"kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7\") pod \"controller-manager-6c4bfbb4d5-77st9\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") " pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:55:57.823486 master-0 kubenswrapper[8988]: I1203 13:55:57.823334 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Dec 03 13:55:57.824614 master-0 kubenswrapper[8988]: I1203 13:55:57.823952 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:57.833358 master-0 kubenswrapper[8988]: I1203 13:55:57.833251 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Dec 03 13:55:57.868883 master-0 kubenswrapper[8988]: I1203 13:55:57.868464 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:55:57.897448 master-0 kubenswrapper[8988]: I1203 13:55:57.896893 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:57.897448 master-0 kubenswrapper[8988]: I1203 13:55:57.896998 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:57.897448 master-0 kubenswrapper[8988]: I1203 13:55:57.897060 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:57.897839 master-0 kubenswrapper[8988]: E1203 13:55:57.897474 8988 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Dec 03 13:55:57.897839 master-0 kubenswrapper[8988]: E1203 13:55:57.897532 8988 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 13:55:58.397517322 +0000 UTC m=+59.585585605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : secret "dns-default-metrics-tls" not found Dec 03 13:55:57.898192 master-0 kubenswrapper[8988]: I1203 13:55:57.898147 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:57.951325 master-0 kubenswrapper[8988]: I1203 13:55:57.951116 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:58.011957 master-0 kubenswrapper[8988]: I1203 13:55:58.011879 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.012250 master-0 kubenswrapper[8988]: I1203 13:55:58.011998 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock\") pod 
\"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.012250 master-0 kubenswrapper[8988]: I1203 13:55:58.012072 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.026148 master-0 kubenswrapper[8988]: I1203 13:55:58.026092 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_24530698-b0b7-4075-893e-ea206720762e/installer/0.log" Dec 03 13:55:58.026658 master-0 kubenswrapper[8988]: I1203 13:55:58.026161 8988 generic.go:334] "Generic (PLEG): container finished" podID="24530698-b0b7-4075-893e-ea206720762e" containerID="47abdfe4b12c0e201ed266b55477e1bb8702daa6c8f3719636ada9d6632b1f2e" exitCode=1 Dec 03 13:55:58.026658 master-0 kubenswrapper[8988]: I1203 13:55:58.026225 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"24530698-b0b7-4075-893e-ea206720762e","Type":"ContainerDied","Data":"47abdfe4b12c0e201ed266b55477e1bb8702daa6c8f3719636ada9d6632b1f2e"} Dec 03 13:55:58.042728 master-0 kubenswrapper[8988]: I1203 13:55:58.042602 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"918c84c43b8f6701690ab8b0d2bb05915c1232978c8cc88ed437a5bf950e3857"} Dec 03 13:55:58.049157 master-0 kubenswrapper[8988]: I1203 13:55:58.048617 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" 
event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"4adef59d3960147ca00f8952091c8a360b45cd338479ba640ccbc408b0879c0b"} Dec 03 13:55:58.049157 master-0 kubenswrapper[8988]: I1203 13:55:58.048681 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"4506b38022818f54a5d2b33dedf681d01c3dcc5e8d6addc6aae4656910332cda"} Dec 03 13:55:58.051506 master-0 kubenswrapper[8988]: I1203 13:55:58.051447 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"40d8eb82545e74f2af7ecc933f815e993ab5d8906f06f08f520dc7dcf35e0ae7"} Dec 03 13:55:58.051618 master-0 kubenswrapper[8988]: I1203 13:55:58.051514 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"5541dbd90d88c32e2769944667fd4132f8eacd3305e658da6b80b11593e7e91f"} Dec 03 13:55:58.052190 master-0 kubenswrapper[8988]: I1203 13:55:58.052151 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:55:58.064079 master-0 kubenswrapper[8988]: I1203 13:55:58.063989 8988 generic.go:334] "Generic (PLEG): container finished" podID="806ba31c-9e91-469f-9b47-556d22efb642" containerID="56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0" exitCode=0 Dec 03 13:55:58.064304 master-0 kubenswrapper[8988]: I1203 13:55:58.064183 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" 
event={"ID":"806ba31c-9e91-469f-9b47-556d22efb642","Type":"ContainerDied","Data":"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0"} Dec 03 13:55:58.064304 master-0 kubenswrapper[8988]: I1203 13:55:58.064240 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" event={"ID":"806ba31c-9e91-469f-9b47-556d22efb642","Type":"ContainerDied","Data":"9818b68d6d1f75a6e900f40a24d8504fd6ca0af9f48684e1b0071adc654be2e7"} Dec 03 13:55:58.064304 master-0 kubenswrapper[8988]: I1203 13:55:58.064295 8988 scope.go:117] "RemoveContainer" containerID="56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0" Dec 03 13:55:58.064500 master-0 kubenswrapper[8988]: I1203 13:55:58.064460 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c79f444f7-8rmss" Dec 03 13:55:58.067397 master-0 kubenswrapper[8988]: I1203 13:55:58.067355 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" event={"ID":"a528f2a3-5033-449c-b8d1-2317ecd02849","Type":"ContainerStarted","Data":"189a25dbd3d8ef610cd0fe885d8dd509b064860f60af936e5ccb32318db4324a"} Dec 03 13:55:58.072316 master-0 kubenswrapper[8988]: I1203 13:55:58.072254 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"bd1e1339c2b2a6cbfa32e7380e633e27308d98ec274ac7883938bbf44216022a"} Dec 03 13:55:58.072397 master-0 kubenswrapper[8988]: I1203 13:55:58.072323 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" 
event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"cc16518683a304fcd778f593df3b44a196725927cbbdb6bf9c7b8406e574f8da"} Dec 03 13:55:58.072397 master-0 kubenswrapper[8988]: I1203 13:55:58.072347 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:55:58.072397 master-0 kubenswrapper[8988]: I1203 13:55:58.072362 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"588f3f48138c2b3392a4eae817dfb25b4a6dd6a9f3ecf65d5033e45b842a15ed"} Dec 03 13:55:58.085839 master-0 kubenswrapper[8988]: I1203 13:55:58.085780 8988 scope.go:117] "RemoveContainer" containerID="56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0" Dec 03 13:55:58.093377 master-0 kubenswrapper[8988]: E1203 13:55:58.089963 8988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0\": container with ID starting with 56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0 not found: ID does not exist" containerID="56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0" Dec 03 13:55:58.093377 master-0 kubenswrapper[8988]: I1203 13:55:58.090048 8988 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0"} err="failed to get container status \"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0\": rpc error: code = NotFound desc = could not find container \"56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0\": container with ID starting with 
56d0b830c26fea53953d4d331af1aa64823caa086cb8a31887dda4957833aad0 not found: ID does not exist" Dec 03 13:55:58.114764 master-0 kubenswrapper[8988]: I1203 13:55:58.112695 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.114764 master-0 kubenswrapper[8988]: I1203 13:55:58.113420 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.114764 master-0 kubenswrapper[8988]: I1203 13:55:58.113518 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.116472 master-0 kubenswrapper[8988]: I1203 13:55:58.115769 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.116472 master-0 kubenswrapper[8988]: I1203 13:55:58.115988 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " 
pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.239922 master-0 kubenswrapper[8988]: I1203 13:55:58.237939 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"] Dec 03 13:55:58.251278 master-0 kubenswrapper[8988]: I1203 13:55:58.248474 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access\") pod \"installer-3-master-0\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") " pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.292353 master-0 kubenswrapper[8988]: I1203 13:55:58.288967 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podStartSLOduration=10.288912359 podStartE2EDuration="10.288912359s" podCreationTimestamp="2025-12-03 13:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:58.278405013 +0000 UTC m=+59.466473296" watchObservedRunningTime="2025-12-03 13:55:58.288912359 +0000 UTC m=+59.476980652" Dec 03 13:55:58.325673 master-0 kubenswrapper[8988]: I1203 13:55:58.325585 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podStartSLOduration=13.325550484 podStartE2EDuration="13.325550484s" podCreationTimestamp="2025-12-03 13:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:58.325126682 +0000 UTC m=+59.513194995" watchObservedRunningTime="2025-12-03 13:55:58.325550484 +0000 UTC m=+59.513618787" Dec 03 13:55:58.349294 master-0 kubenswrapper[8988]: I1203 13:55:58.346305 8988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_24530698-b0b7-4075-893e-ea206720762e/installer/0.log" Dec 03 13:55:58.349294 master-0 kubenswrapper[8988]: I1203 13:55:58.346431 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Dec 03 13:55:58.359474 master-0 kubenswrapper[8988]: I1203 13:55:58.356625 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4xlhs"] Dec 03 13:55:58.359474 master-0 kubenswrapper[8988]: E1203 13:55:58.356867 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24530698-b0b7-4075-893e-ea206720762e" containerName="installer" Dec 03 13:55:58.359474 master-0 kubenswrapper[8988]: I1203 13:55:58.356881 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="24530698-b0b7-4075-893e-ea206720762e" containerName="installer" Dec 03 13:55:58.359474 master-0 kubenswrapper[8988]: I1203 13:55:58.356995 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="24530698-b0b7-4075-893e-ea206720762e" containerName="installer" Dec 03 13:55:58.359474 master-0 kubenswrapper[8988]: I1203 13:55:58.358447 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:55:58.363291 master-0 kubenswrapper[8988]: I1203 13:55:58.363152 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"] Dec 03 13:55:58.376886 master-0 kubenswrapper[8988]: I1203 13:55:58.376821 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c79f444f7-8rmss"] Dec 03 13:55:58.402796 master-0 kubenswrapper[8988]: I1203 13:55:58.400096 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" podStartSLOduration=1.400062742 podStartE2EDuration="1.400062742s" podCreationTimestamp="2025-12-03 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:58.397341003 +0000 UTC m=+59.585409296" watchObservedRunningTime="2025-12-03 13:55:58.400062742 +0000 UTC m=+59.588131025" Dec 03 13:55:58.430875 master-0 kubenswrapper[8988]: I1203 13:55:58.430791 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:58.449075 master-0 kubenswrapper[8988]: I1203 13:55:58.448870 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:55:58.461898 master-0 kubenswrapper[8988]: I1203 13:55:58.461828 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Dec 03 13:55:58.542168 master-0 kubenswrapper[8988]: I1203 13:55:58.542105 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock\") pod \"24530698-b0b7-4075-893e-ea206720762e\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " Dec 03 13:55:58.542168 master-0 kubenswrapper[8988]: I1203 13:55:58.542176 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir\") pod \"24530698-b0b7-4075-893e-ea206720762e\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " Dec 03 13:55:58.542537 master-0 kubenswrapper[8988]: I1203 13:55:58.542221 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access\") pod \"24530698-b0b7-4075-893e-ea206720762e\" (UID: \"24530698-b0b7-4075-893e-ea206720762e\") " Dec 03 13:55:58.542537 master-0 kubenswrapper[8988]: I1203 13:55:58.542422 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:55:58.542537 master-0 kubenswrapper[8988]: I1203 13:55:58.542494 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:55:58.542673 
master-0 kubenswrapper[8988]: I1203 13:55:58.542650 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock" (OuterVolumeSpecName: "var-lock") pod "24530698-b0b7-4075-893e-ea206720762e" (UID: "24530698-b0b7-4075-893e-ea206720762e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:58.542767 master-0 kubenswrapper[8988]: I1203 13:55:58.542678 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24530698-b0b7-4075-893e-ea206720762e" (UID: "24530698-b0b7-4075-893e-ea206720762e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:55:58.549114 master-0 kubenswrapper[8988]: I1203 13:55:58.549017 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24530698-b0b7-4075-893e-ea206720762e" (UID: "24530698-b0b7-4075-893e-ea206720762e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648109 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648136 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648205 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648271 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648286 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24530698-b0b7-4075-893e-ea206720762e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648300 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24530698-b0b7-4075-893e-ea206720762e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 13:55:58.654344 master-0 kubenswrapper[8988]: I1203 13:55:58.648526 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:55:58.679950 master-0 kubenswrapper[8988]: I1203 13:55:58.677146 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:55:58.735459 master-0 kubenswrapper[8988]: I1203 13:55:58.734329 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:55:58.831021 master-0 kubenswrapper[8988]: I1203 13:55:58.830877 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Dec 03 13:55:58.913198 master-0 kubenswrapper[8988]: I1203 13:55:58.913107 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5m4f8"]
Dec 03 13:55:58.942691 master-0 kubenswrapper[8988]: W1203 13:55:58.942585 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4669137a_fbc4_41e1_8eeb_5f06b9da2641.slice/crio-90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2 WatchSource:0}: Error finding container 90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2: Status 404 returned error can't find the container with id 90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2
Dec 03 13:55:59.049738 master-0 kubenswrapper[8988]: I1203 13:55:59.047245 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806ba31c-9e91-469f-9b47-556d22efb642" path="/var/lib/kubelet/pods/806ba31c-9e91-469f-9b47-556d22efb642/volumes"
Dec 03 13:55:59.106621 master-0 kubenswrapper[8988]: I1203 13:55:59.103468 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"0056326a-6d5e-47cd-b350-c5c6287cfe2a","Type":"ContainerStarted","Data":"7b6c6e3dccf66eea8dc77784a207d387c88a1f16890ee62b88e320670dcc2abd"}
Dec 03 13:55:59.125297 master-0 kubenswrapper[8988]: I1203 13:55:59.123922 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" event={"ID":"90b676ff-cfc1-4760-bd3d-d88c1f47053f","Type":"ContainerStarted","Data":"012817d7e2f1bdd60d981d2157b0ef8b7e886395ea41883c9548a2bf3f0ac828"}
Dec 03 13:55:59.125297 master-0 kubenswrapper[8988]: I1203 13:55:59.124007 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" event={"ID":"90b676ff-cfc1-4760-bd3d-d88c1f47053f","Type":"ContainerStarted","Data":"60a506b173450d9c09057ffad94fd85c489f76103229f978d66bca32dc4aab51"}
Dec 03 13:55:59.125297 master-0 kubenswrapper[8988]: I1203 13:55:59.124360 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:59.126577 master-0 kubenswrapper[8988]: I1203 13:55:59.126525 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_24530698-b0b7-4075-893e-ea206720762e/installer/0.log"
Dec 03 13:55:59.126780 master-0 kubenswrapper[8988]: I1203 13:55:59.126713 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Dec 03 13:55:59.128375 master-0 kubenswrapper[8988]: I1203 13:55:59.126962 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"24530698-b0b7-4075-893e-ea206720762e","Type":"ContainerDied","Data":"baa76b72f2d92c3be922339b21b3ba40427be4021e14de7636305106e7fba745"}
Dec 03 13:55:59.128375 master-0 kubenswrapper[8988]: I1203 13:55:59.127039 8988 scope.go:117] "RemoveContainer" containerID="47abdfe4b12c0e201ed266b55477e1bb8702daa6c8f3719636ada9d6632b1f2e"
Dec 03 13:55:59.132786 master-0 kubenswrapper[8988]: I1203 13:55:59.132717 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:55:59.133886 master-0 kubenswrapper[8988]: I1203 13:55:59.133813 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"636d93d2bc5d6274a68744e7bb8286da893d7e599b6de981210f2789cc0fd2da"}
Dec 03 13:55:59.142983 master-0 kubenswrapper[8988]: I1203 13:55:59.142919 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2"}
Dec 03 13:55:59.230826 master-0 kubenswrapper[8988]: I1203 13:55:59.229548 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" podStartSLOduration=10.229524994 podStartE2EDuration="10.229524994s" podCreationTimestamp="2025-12-03 13:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:55:59.22939558 +0000 UTC m=+60.417463883" watchObservedRunningTime="2025-12-03 13:55:59.229524994 +0000 UTC m=+60.417593287"
Dec 03 13:55:59.251932 master-0 kubenswrapper[8988]: I1203 13:55:59.251858 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Dec 03 13:55:59.266474 master-0 kubenswrapper[8988]: I1203 13:55:59.266360 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Dec 03 13:56:00.152567 master-0 kubenswrapper[8988]: I1203 13:56:00.151896 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"433b3fa5673e195032b56a487e1f362fc9d8cf480bbfba0ea3d9503f78f0235a"}
Dec 03 13:56:00.159004 master-0 kubenswrapper[8988]: I1203 13:56:00.158887 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"0056326a-6d5e-47cd-b350-c5c6287cfe2a","Type":"ContainerStarted","Data":"aec082cc889dc6b0c5cb64df67295d0aaf07a03ff02ca780819a5ae5d89f24aa"}
Dec 03 13:56:00.193410 master-0 kubenswrapper[8988]: I1203 13:56:00.190858 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4xlhs" podStartSLOduration=2.190830272 podStartE2EDuration="2.190830272s" podCreationTimestamp="2025-12-03 13:55:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:00.173107356 +0000 UTC m=+61.361175659" watchObservedRunningTime="2025-12-03 13:56:00.190830272 +0000 UTC m=+61.378898555"
Dec 03 13:56:00.219539 master-0 kubenswrapper[8988]: I1203 13:56:00.218381 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.21827361 podStartE2EDuration="3.21827361s" podCreationTimestamp="2025-12-03 13:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:00.204883471 +0000 UTC m=+61.392951774" watchObservedRunningTime="2025-12-03 13:56:00.21827361 +0000 UTC m=+61.406341913"
Dec 03 13:56:01.033062 master-0 kubenswrapper[8988]: I1203 13:56:01.032972 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24530698-b0b7-4075-893e-ea206720762e" path="/var/lib/kubelet/pods/24530698-b0b7-4075-893e-ea206720762e/volumes"
Dec 03 13:56:01.163904 master-0 kubenswrapper[8988]: I1203 13:56:01.163850 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"]
Dec 03 13:56:01.166075 master-0 kubenswrapper[8988]: I1203 13:56:01.164078 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator" containerID="cri-o://12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539" gracePeriod=130
Dec 03 13:56:02.172823 master-0 kubenswrapper[8988]: I1203 13:56:02.172735 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" event={"ID":"ce26e464-9a7c-4b22-a2b4-03706b351455","Type":"ContainerDied","Data":"12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539"}
Dec 03 13:56:02.172823 master-0 kubenswrapper[8988]: I1203 13:56:02.172800 8988 generic.go:334] "Generic (PLEG): container finished" podID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerID="12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539" exitCode=0
Dec 03 13:56:02.220500 master-0 kubenswrapper[8988]: I1203 13:56:02.217375 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Dec 03 13:56:02.220500 master-0 kubenswrapper[8988]: I1203 13:56:02.217930 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.221296 master-0 kubenswrapper[8988]: I1203 13:56:02.221195 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 03 13:56:02.235293 master-0 kubenswrapper[8988]: I1203 13:56:02.233488 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Dec 03 13:56:02.289415 master-0 kubenswrapper[8988]: I1203 13:56:02.289332 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.289415 master-0 kubenswrapper[8988]: I1203 13:56:02.289403 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.289833 master-0 kubenswrapper[8988]: I1203 13:56:02.289537 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.390769 master-0 kubenswrapper[8988]: I1203 13:56:02.390468 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.391104 master-0 kubenswrapper[8988]: I1203 13:56:02.390801 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.391104 master-0 kubenswrapper[8988]: I1203 13:56:02.390845 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.391104 master-0 kubenswrapper[8988]: I1203 13:56:02.390979 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.391104 master-0 kubenswrapper[8988]: I1203 13:56:02.391030 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.420614 master-0 kubenswrapper[8988]: I1203 13:56:02.420545 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:02.556329 master-0 kubenswrapper[8988]: I1203 13:56:02.556240 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:56:03.481596 master-0 kubenswrapper[8988]: I1203 13:56:03.481495 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"]
Dec 03 13:56:03.482339 master-0 kubenswrapper[8988]: I1203 13:56:03.481825 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" podUID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" containerName="controller-manager" containerID="cri-o://012817d7e2f1bdd60d981d2157b0ef8b7e886395ea41883c9548a2bf3f0ac828" gracePeriod=30
Dec 03 13:56:04.055367 master-0 kubenswrapper[8988]: I1203 13:56:04.055275 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:56:04.055367 master-0 kubenswrapper[8988]: I1203 13:56:04.055367 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:56:04.055770 master-0 kubenswrapper[8988]: I1203 13:56:04.055418 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:56:04.055770 master-0 kubenswrapper[8988]: I1203 13:56:04.055460 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:56:04.055770 master-0 kubenswrapper[8988]: I1203 13:56:04.055513 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:56:04.059705 master-0 kubenswrapper[8988]: I1203 13:56:04.059649 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"multus-admission-controller-78ddcf56f9-8l84w\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:56:04.059904 master-0 kubenswrapper[8988]: I1203 13:56:04.059802 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:56:04.060289 master-0 kubenswrapper[8988]: I1203 13:56:04.060227 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:56:04.060957 master-0 kubenswrapper[8988]: I1203 13:56:04.060889 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:56:04.062216 master-0 kubenswrapper[8988]: I1203 13:56:04.062153 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:56:04.148358 master-0 kubenswrapper[8988]: I1203 13:56:04.148241 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:56:04.148358 master-0 kubenswrapper[8988]: I1203 13:56:04.148248 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:56:04.148999 master-0 kubenswrapper[8988]: I1203 13:56:04.148960 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:56:04.153677 master-0 kubenswrapper[8988]: I1203 13:56:04.153635 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:56:04.157909 master-0 kubenswrapper[8988]: I1203 13:56:04.157847 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:56:04.186554 master-0 kubenswrapper[8988]: I1203 13:56:04.186458 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" event={"ID":"ce26e464-9a7c-4b22-a2b4-03706b351455","Type":"ContainerDied","Data":"7fb4e2d334a547fbeaaea1fa9c53c41549464da1350be876ed579d7818ec2701"}
Dec 03 13:56:04.186554 master-0 kubenswrapper[8988]: I1203 13:56:04.186552 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb4e2d334a547fbeaaea1fa9c53c41549464da1350be876ed579d7818ec2701"
Dec 03 13:56:04.190057 master-0 kubenswrapper[8988]: I1203 13:56:04.189980 8988 generic.go:334] "Generic (PLEG): container finished" podID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" containerID="012817d7e2f1bdd60d981d2157b0ef8b7e886395ea41883c9548a2bf3f0ac828" exitCode=0
Dec 03 13:56:04.190164 master-0 kubenswrapper[8988]: I1203 13:56:04.190065 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" event={"ID":"90b676ff-cfc1-4760-bd3d-d88c1f47053f","Type":"ContainerDied","Data":"012817d7e2f1bdd60d981d2157b0ef8b7e886395ea41883c9548a2bf3f0ac828"}
Dec 03 13:56:04.192633 master-0 kubenswrapper[8988]: I1203 13:56:04.192586 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"
Dec 03 13:56:04.259111 master-0 kubenswrapper[8988]: I1203 13:56:04.259064 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") pod \"ce26e464-9a7c-4b22-a2b4-03706b351455\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") "
Dec 03 13:56:04.259228 master-0 kubenswrapper[8988]: I1203 13:56:04.259143 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") pod \"ce26e464-9a7c-4b22-a2b4-03706b351455\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") "
Dec 03 13:56:04.269378 master-0 kubenswrapper[8988]: I1203 13:56:04.269299 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ce26e464-9a7c-4b22-a2b4-03706b351455" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:56:04.269642 master-0 kubenswrapper[8988]: I1203 13:56:04.269485 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce26e464-9a7c-4b22-a2b4-03706b351455" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:56:04.361812 master-0 kubenswrapper[8988]: I1203 13:56:04.361753 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") pod \"ce26e464-9a7c-4b22-a2b4-03706b351455\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") "
Dec 03 13:56:04.361812 master-0 kubenswrapper[8988]: I1203 13:56:04.361813 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") pod \"ce26e464-9a7c-4b22-a2b4-03706b351455\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") "
Dec 03 13:56:04.362034 master-0 kubenswrapper[8988]: I1203 13:56:04.361901 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") pod \"ce26e464-9a7c-4b22-a2b4-03706b351455\" (UID: \"ce26e464-9a7c-4b22-a2b4-03706b351455\") "
Dec 03 13:56:04.362136 master-0 kubenswrapper[8988]: I1203 13:56:04.362113 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce26e464-9a7c-4b22-a2b4-03706b351455-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.362185 master-0 kubenswrapper[8988]: I1203 13:56:04.362137 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce26e464-9a7c-4b22-a2b4-03706b351455-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.362218 master-0 kubenswrapper[8988]: I1203 13:56:04.362197 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "ce26e464-9a7c-4b22-a2b4-03706b351455" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:04.365454 master-0 kubenswrapper[8988]: I1203 13:56:04.363795 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "ce26e464-9a7c-4b22-a2b4-03706b351455" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:04.365454 master-0 kubenswrapper[8988]: I1203 13:56:04.364741 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca" (OuterVolumeSpecName: "service-ca") pod "ce26e464-9a7c-4b22-a2b4-03706b351455" (UID: "ce26e464-9a7c-4b22-a2b4-03706b351455"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:56:04.453562 master-0 kubenswrapper[8988]: I1203 13:56:04.453513 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"
Dec 03 13:56:04.463313 master-0 kubenswrapper[8988]: I1203 13:56:04.463229 8988 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ce26e464-9a7c-4b22-a2b4-03706b351455-service-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.463313 master-0 kubenswrapper[8988]: I1203 13:56:04.463310 8988 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-ssl-certs\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.463467 master-0 kubenswrapper[8988]: I1203 13:56:04.463329 8988 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ce26e464-9a7c-4b22-a2b4-03706b351455-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.564292 master-0 kubenswrapper[8988]: I1203 13:56:04.564041 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles\") pod \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
Dec 03 13:56:04.564292 master-0 kubenswrapper[8988]: I1203 13:56:04.564137 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config\") pod \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
Dec 03 13:56:04.564292 master-0 kubenswrapper[8988]: I1203 13:56:04.564201 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert\") pod \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
Dec 03 13:56:04.564292 master-0 kubenswrapper[8988]: I1203 13:56:04.564291 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvqr7\" (UniqueName: \"kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7\") pod \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
Dec 03 13:56:04.564292 master-0 kubenswrapper[8988]: I1203 13:56:04.564323 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca\") pod \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\" (UID: \"90b676ff-cfc1-4760-bd3d-d88c1f47053f\") "
Dec 03 13:56:04.565819 master-0 kubenswrapper[8988]: I1203 13:56:04.565092 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca" (OuterVolumeSpecName: "client-ca") pod "90b676ff-cfc1-4760-bd3d-d88c1f47053f" (UID: "90b676ff-cfc1-4760-bd3d-d88c1f47053f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:56:04.566061 master-0 kubenswrapper[8988]: I1203 13:56:04.565903 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "90b676ff-cfc1-4760-bd3d-d88c1f47053f" (UID: "90b676ff-cfc1-4760-bd3d-d88c1f47053f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:56:04.570617 master-0 kubenswrapper[8988]: I1203 13:56:04.569283 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config" (OuterVolumeSpecName: "config") pod "90b676ff-cfc1-4760-bd3d-d88c1f47053f" (UID: "90b676ff-cfc1-4760-bd3d-d88c1f47053f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:56:04.582463 master-0 kubenswrapper[8988]: I1203 13:56:04.582294 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7" (OuterVolumeSpecName: "kube-api-access-vvqr7") pod "90b676ff-cfc1-4760-bd3d-d88c1f47053f" (UID: "90b676ff-cfc1-4760-bd3d-d88c1f47053f"). InnerVolumeSpecName "kube-api-access-vvqr7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:56:04.586993 master-0 kubenswrapper[8988]: I1203 13:56:04.586904 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "90b676ff-cfc1-4760-bd3d-d88c1f47053f" (UID: "90b676ff-cfc1-4760-bd3d-d88c1f47053f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:56:04.665573 master-0 kubenswrapper[8988]: I1203 13:56:04.665489 8988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.665573 master-0 kubenswrapper[8988]: I1203 13:56:04.665567 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.665573 master-0 kubenswrapper[8988]: I1203 13:56:04.665584 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b676ff-cfc1-4760-bd3d-d88c1f47053f-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.665573 master-0 kubenswrapper[8988]: I1203 13:56:04.665597 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvqr7\" (UniqueName: \"kubernetes.io/projected/90b676ff-cfc1-4760-bd3d-d88c1f47053f-kube-api-access-vvqr7\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.666016 master-0 kubenswrapper[8988]: I1203 13:56:04.665615 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b676ff-cfc1-4760-bd3d-d88c1f47053f-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:04.784240 master-0 kubenswrapper[8988]: I1203 13:56:04.784044 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"]
Dec 03 13:56:04.826857 master-0 kubenswrapper[8988]: I1203 13:56:04.826815 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"]
Dec 03 13:56:04.833503 master-0 kubenswrapper[8988]: I1203 13:56:04.833380 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"]
Dec 03 13:56:04.839126 master-0 kubenswrapper[8988]: I1203 13:56:04.839058 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"]
Dec 03 13:56:04.852201 master-0 kubenswrapper[8988]: I1203 13:56:04.852129 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ch7xd"]
Dec 03 13:56:04.890984 master-0 kubenswrapper[8988]: W1203 13:56:04.890897 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3c1ebb9_f052_410b_a999_45e9b75b0e58.slice/crio-b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2 WatchSource:0}: Error finding container b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2: Status 404 returned error can't find the container with id b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2
Dec 03 13:56:05.043816 master-0 kubenswrapper[8988]: I1203 13:56:05.043764 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"]
Dec 03 13:56:05.069135 master-0 kubenswrapper[8988]: W1203 13:56:05.069041 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55351b08_d46d_4327_aa5e_ae17fdffdfb5.slice/crio-fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352 WatchSource:0}: Error finding container fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352: Status 404 returned error can't find the container with id fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352
Dec 03 13:56:05.097558 master-0 kubenswrapper[8988]: I1203 13:56:05.097499 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"]
Dec 03 13:56:05.097770 master-0 kubenswrapper[8988]: E1203 13:56:05.097745 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 13:56:05.097770 master-0 kubenswrapper[8988]: I1203 13:56:05.097762 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 13:56:05.097839 master-0 kubenswrapper[8988]: E1203 13:56:05.097771 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" containerName="controller-manager"
Dec 03 13:56:05.097839 master-0 kubenswrapper[8988]: I1203 13:56:05.097780 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" containerName="controller-manager"
Dec 03 13:56:05.097910 master-0 kubenswrapper[8988]: I1203 13:56:05.097890 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 13:56:05.097910 master-0 kubenswrapper[8988]: I1203 13:56:05.097907 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" containerName="controller-manager"
Dec 03 13:56:05.098290 master-0 kubenswrapper[8988]: I1203 13:56:05.098235 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:56:05.110558 master-0 kubenswrapper[8988]: I1203 13:56:05.108793 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"]
Dec 03 13:56:05.179547 master-0 kubenswrapper[8988]: I1203 13:56:05.179473 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:56:05.179547 master-0 kubenswrapper[8988]: I1203 13:56:05.179526 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:56:05.179547 master-0 kubenswrapper[8988]: I1203 13:56:05.179561 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:56:05.179987 master-0 kubenswrapper[8988]: I1203 13:56:05.179580 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID:
\"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.179987 master-0 kubenswrapper[8988]: I1203 13:56:05.179615 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvbr\" (UniqueName: \"kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.196797 master-0 kubenswrapper[8988]: I1203 13:56:05.196708 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"a4482da3b6269c37fd64ed4a723b3d1c0f7f294b123b00a40d321fec5fbfbd20"} Dec 03 13:56:05.198101 master-0 kubenswrapper[8988]: I1203 13:56:05.198055 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerStarted","Data":"d923e2294dc5bd349ef1897a915245d9a43be1c9d681ac05585e4028bf44c392"} Dec 03 13:56:05.199534 master-0 kubenswrapper[8988]: I1203 13:56:05.199502 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ee6150c4-22d1-465b-a934-74d5e197d646","Type":"ContainerStarted","Data":"a3e5841f6f6d8362456d4cf786f11e54bc8b9d3300e0bfe95ffe518785f2d7ae"} Dec 03 13:56:05.200888 master-0 kubenswrapper[8988]: I1203 13:56:05.200852 8988 generic.go:334] "Generic (PLEG): container finished" podID="e97e1725-cb55-4ce3-952d-a4fd0731577d" containerID="338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237" exitCode=0 Dec 03 13:56:05.200972 master-0 kubenswrapper[8988]: I1203 13:56:05.200901 8988 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerDied","Data":"338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237"} Dec 03 13:56:05.201460 master-0 kubenswrapper[8988]: I1203 13:56:05.201412 8988 scope.go:117] "RemoveContainer" containerID="338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237" Dec 03 13:56:05.204693 master-0 kubenswrapper[8988]: I1203 13:56:05.204605 8988 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="0a533903a7af56a0f95d40262b0fc66ff75c086f5871d9886f30269d0ad24011" exitCode=0 Dec 03 13:56:05.204693 master-0 kubenswrapper[8988]: I1203 13:56:05.204666 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerDied","Data":"0a533903a7af56a0f95d40262b0fc66ff75c086f5871d9886f30269d0ad24011"} Dec 03 13:56:05.219291 master-0 kubenswrapper[8988]: I1203 13:56:05.219194 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" event={"ID":"90b676ff-cfc1-4760-bd3d-d88c1f47053f","Type":"ContainerDied","Data":"60a506b173450d9c09057ffad94fd85c489f76103229f978d66bca32dc4aab51"} Dec 03 13:56:05.219291 master-0 kubenswrapper[8988]: I1203 13:56:05.219306 8988 scope.go:117] "RemoveContainer" containerID="012817d7e2f1bdd60d981d2157b0ef8b7e886395ea41883c9548a2bf3f0ac828" Dec 03 13:56:05.219706 master-0 kubenswrapper[8988]: I1203 13:56:05.219532 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9" Dec 03 13:56:05.225615 master-0 kubenswrapper[8988]: I1203 13:56:05.225553 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"61b6fae7a82c65416e7eb61155697378feee9b64a22c33dc9655e8c1e290fe92"} Dec 03 13:56:05.225615 master-0 kubenswrapper[8988]: I1203 13:56:05.225624 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"fad57f5bd666756300e4a070da9a2a7e83edc0926aeb5962bc6355c529e364ae"} Dec 03 13:56:05.225926 master-0 kubenswrapper[8988]: I1203 13:56:05.225871 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5m4f8" Dec 03 13:56:05.228575 master-0 kubenswrapper[8988]: I1203 13:56:05.228524 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"300bdbe13ceab4cb35cb2752094e93a2034759f3ecd4444f35e3550cfb8561c6"} Dec 03 13:56:05.228739 master-0 kubenswrapper[8988]: I1203 13:56:05.228582 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"608e5faf1d1f7ffd467c7714def83c802d4d5d7a97b5dd1c6daac1ec34f49d3a"} Dec 03 13:56:05.230996 master-0 kubenswrapper[8988]: I1203 13:56:05.230951 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" 
event={"ID":"a528f2a3-5033-449c-b8d1-2317ecd02849","Type":"ContainerStarted","Data":"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6"} Dec 03 13:56:05.231327 master-0 kubenswrapper[8988]: I1203 13:56:05.231193 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" podUID="a528f2a3-5033-449c-b8d1-2317ecd02849" containerName="route-controller-manager" containerID="cri-o://1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6" gracePeriod=30 Dec 03 13:56:05.231823 master-0 kubenswrapper[8988]: I1203 13:56:05.231543 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:56:05.232446 master-0 kubenswrapper[8988]: I1203 13:56:05.232231 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2"} Dec 03 13:56:05.233223 master-0 kubenswrapper[8988]: I1203 13:56:05.233134 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-869c786959-vrvwt" Dec 03 13:56:05.233885 master-0 kubenswrapper[8988]: I1203 13:56:05.233843 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352"} Dec 03 13:56:05.241091 master-0 kubenswrapper[8988]: I1203 13:56:05.241030 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:56:05.281668 master-0 kubenswrapper[8988]: I1203 13:56:05.281391 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.281668 master-0 kubenswrapper[8988]: I1203 13:56:05.281438 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.281668 master-0 kubenswrapper[8988]: I1203 13:56:05.281525 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fvbr\" (UniqueName: \"kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 
13:56:05.281668 master-0 kubenswrapper[8988]: I1203 13:56:05.281642 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.281668 master-0 kubenswrapper[8988]: I1203 13:56:05.281662 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.284813 master-0 kubenswrapper[8988]: I1203 13:56:05.284726 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.286281 master-0 kubenswrapper[8988]: I1203 13:56:05.286053 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.286735 master-0 kubenswrapper[8988]: I1203 13:56:05.286539 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: 
\"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.291921 master-0 kubenswrapper[8988]: I1203 13:56:05.287026 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.303291 master-0 kubenswrapper[8988]: I1203 13:56:05.303158 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5m4f8" podStartSLOduration=3.027119998 podStartE2EDuration="8.303056072s" podCreationTimestamp="2025-12-03 13:55:57 +0000 UTC" firstStartedPulling="2025-12-03 13:55:58.958002495 +0000 UTC m=+60.146070778" lastFinishedPulling="2025-12-03 13:56:04.233938569 +0000 UTC m=+65.422006852" observedRunningTime="2025-12-03 13:56:05.301504057 +0000 UTC m=+66.489572370" watchObservedRunningTime="2025-12-03 13:56:05.303056072 +0000 UTC m=+66.491124355" Dec 03 13:56:05.312943 master-0 kubenswrapper[8988]: I1203 13:56:05.312879 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fvbr\" (UniqueName: \"kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr\") pod \"controller-manager-6fb5f97c4d-bcdbq\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") " pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.321899 master-0 kubenswrapper[8988]: I1203 13:56:05.320528 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" podStartSLOduration=14.067470566 podStartE2EDuration="21.320490039s" podCreationTimestamp="2025-12-03 13:55:44 +0000 UTC" firstStartedPulling="2025-12-03 
13:55:56.960522762 +0000 UTC m=+58.148591045" lastFinishedPulling="2025-12-03 13:56:04.213542235 +0000 UTC m=+65.401610518" observedRunningTime="2025-12-03 13:56:05.319351826 +0000 UTC m=+66.507420119" watchObservedRunningTime="2025-12-03 13:56:05.320490039 +0000 UTC m=+66.508558312" Dec 03 13:56:05.343548 master-0 kubenswrapper[8988]: I1203 13:56:05.343382 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"] Dec 03 13:56:05.345450 master-0 kubenswrapper[8988]: I1203 13:56:05.345363 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6c4bfbb4d5-77st9"] Dec 03 13:56:05.384372 master-0 kubenswrapper[8988]: I1203 13:56:05.384293 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"] Dec 03 13:56:05.384372 master-0 kubenswrapper[8988]: I1203 13:56:05.384380 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-869c786959-vrvwt"] Dec 03 13:56:05.428662 master-0 kubenswrapper[8988]: I1203 13:56:05.428564 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"] Dec 03 13:56:05.429243 master-0 kubenswrapper[8988]: I1203 13:56:05.429197 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.432064 master-0 kubenswrapper[8988]: I1203 13:56:05.431992 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 03 13:56:05.432211 master-0 kubenswrapper[8988]: I1203 13:56:05.432174 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 13:56:05.432676 master-0 kubenswrapper[8988]: I1203 13:56:05.432619 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 13:56:05.466989 master-0 kubenswrapper[8988]: I1203 13:56:05.466946 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:05.485198 master-0 kubenswrapper[8988]: I1203 13:56:05.485097 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.485198 master-0 kubenswrapper[8988]: I1203 13:56:05.485179 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.485645 master-0 kubenswrapper[8988]: I1203 13:56:05.485319 8988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.485645 master-0 kubenswrapper[8988]: I1203 13:56:05.485349 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.485645 master-0 kubenswrapper[8988]: I1203 13:56:05.485387 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586027 master-0 kubenswrapper[8988]: I1203 13:56:05.585969 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586027 master-0 kubenswrapper[8988]: I1203 13:56:05.586035 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod 
\"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586659 master-0 kubenswrapper[8988]: I1203 13:56:05.586055 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586659 master-0 kubenswrapper[8988]: I1203 13:56:05.586076 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586659 master-0 kubenswrapper[8988]: I1203 13:56:05.586104 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586659 master-0 kubenswrapper[8988]: I1203 13:56:05.586196 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.586659 master-0 
kubenswrapper[8988]: I1203 13:56:05.586499 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.591652 master-0 kubenswrapper[8988]: I1203 13:56:05.591626 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.592323 master-0 kubenswrapper[8988]: I1203 13:56:05.592293 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.609948 master-0 kubenswrapper[8988]: I1203 13:56:05.609894 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.629655 master-0 kubenswrapper[8988]: I1203 13:56:05.629558 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:56:05.686927 master-0 kubenswrapper[8988]: I1203 13:56:05.686823 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pncc2\" (UniqueName: \"kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2\") pod \"a528f2a3-5033-449c-b8d1-2317ecd02849\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " Dec 03 13:56:05.686927 master-0 kubenswrapper[8988]: I1203 13:56:05.686910 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca\") pod \"a528f2a3-5033-449c-b8d1-2317ecd02849\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " Dec 03 13:56:05.686927 master-0 kubenswrapper[8988]: I1203 13:56:05.686958 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert\") pod \"a528f2a3-5033-449c-b8d1-2317ecd02849\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " Dec 03 13:56:05.687473 master-0 kubenswrapper[8988]: I1203 13:56:05.686990 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config\") pod \"a528f2a3-5033-449c-b8d1-2317ecd02849\" (UID: \"a528f2a3-5033-449c-b8d1-2317ecd02849\") " Dec 03 13:56:05.688470 master-0 kubenswrapper[8988]: I1203 13:56:05.688436 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config" (OuterVolumeSpecName: "config") pod "a528f2a3-5033-449c-b8d1-2317ecd02849" (UID: "a528f2a3-5033-449c-b8d1-2317ecd02849"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:56:05.689422 master-0 kubenswrapper[8988]: I1203 13:56:05.689365 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca" (OuterVolumeSpecName: "client-ca") pod "a528f2a3-5033-449c-b8d1-2317ecd02849" (UID: "a528f2a3-5033-449c-b8d1-2317ecd02849"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:56:05.698595 master-0 kubenswrapper[8988]: I1203 13:56:05.698503 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2" (OuterVolumeSpecName: "kube-api-access-pncc2") pod "a528f2a3-5033-449c-b8d1-2317ecd02849" (UID: "a528f2a3-5033-449c-b8d1-2317ecd02849"). InnerVolumeSpecName "kube-api-access-pncc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:56:05.701945 master-0 kubenswrapper[8988]: I1203 13:56:05.701814 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a528f2a3-5033-449c-b8d1-2317ecd02849" (UID: "a528f2a3-5033-449c-b8d1-2317ecd02849"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:56:05.703095 master-0 kubenswrapper[8988]: I1203 13:56:05.703035 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"] Dec 03 13:56:05.713672 master-0 kubenswrapper[8988]: W1203 13:56:05.711062 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod728bc191_0639_49c8_a939_a81759bec820.slice/crio-02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad WatchSource:0}: Error finding container 02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad: Status 404 returned error can't find the container with id 02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad Dec 03 13:56:05.789364 master-0 kubenswrapper[8988]: I1203 13:56:05.789297 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pncc2\" (UniqueName: \"kubernetes.io/projected/a528f2a3-5033-449c-b8d1-2317ecd02849-kube-api-access-pncc2\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:05.789364 master-0 kubenswrapper[8988]: I1203 13:56:05.789357 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:05.789364 master-0 kubenswrapper[8988]: I1203 13:56:05.789369 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a528f2a3-5033-449c-b8d1-2317ecd02849-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:05.789364 master-0 kubenswrapper[8988]: I1203 13:56:05.789379 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a528f2a3-5033-449c-b8d1-2317ecd02849-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:05.818859 master-0 kubenswrapper[8988]: I1203 
13:56:05.814805 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:56:05.851527 master-0 kubenswrapper[8988]: W1203 13:56:05.851287 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec89938d_35a5_46ba_8c63_12489db18cbd.slice/crio-e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555 WatchSource:0}: Error finding container e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555: Status 404 returned error can't find the container with id e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555 Dec 03 13:56:06.270334 master-0 kubenswrapper[8988]: I1203 13:56:06.270217 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"ddbb768c864774f78204191462e3eed3712c04f6cc6d64ff756ae1b9f2a1eff5"} Dec 03 13:56:06.285537 master-0 kubenswrapper[8988]: I1203 13:56:06.283688 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" event={"ID":"728bc191-0639-49c8-a939-a81759bec820","Type":"ContainerStarted","Data":"2101f07d21b09bef3562a60720d029d2f6c54a9dc924b95901f0ed03cda4f409"} Dec 03 13:56:06.285537 master-0 kubenswrapper[8988]: I1203 13:56:06.283767 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" event={"ID":"728bc191-0639-49c8-a939-a81759bec820","Type":"ContainerStarted","Data":"02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad"} Dec 03 13:56:06.285537 master-0 kubenswrapper[8988]: I1203 13:56:06.285093 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 
13:56:06.292688 master-0 kubenswrapper[8988]: I1203 13:56:06.292551 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ee6150c4-22d1-465b-a934-74d5e197d646","Type":"ContainerStarted","Data":"9bec250a37c6fd420e6a68fa34a40e8bf74f0c10fd29a6d0f7605bcfd065e230"} Dec 03 13:56:06.306148 master-0 kubenswrapper[8988]: I1203 13:56:06.305088 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"6595725af54f433a5152cc38b13503348ac89e555e54ae8506677c56b070363b"} Dec 03 13:56:06.306148 master-0 kubenswrapper[8988]: I1203 13:56:06.305156 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555"} Dec 03 13:56:06.312390 master-0 kubenswrapper[8988]: I1203 13:56:06.312332 8988 generic.go:334] "Generic (PLEG): container finished" podID="a528f2a3-5033-449c-b8d1-2317ecd02849" containerID="1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6" exitCode=0 Dec 03 13:56:06.312536 master-0 kubenswrapper[8988]: I1203 13:56:06.312447 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" event={"ID":"a528f2a3-5033-449c-b8d1-2317ecd02849","Type":"ContainerDied","Data":"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6"} Dec 03 13:56:06.312536 master-0 kubenswrapper[8988]: I1203 13:56:06.312484 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" 
event={"ID":"a528f2a3-5033-449c-b8d1-2317ecd02849","Type":"ContainerDied","Data":"189a25dbd3d8ef610cd0fe885d8dd509b064860f60af936e5ccb32318db4324a"} Dec 03 13:56:06.312536 master-0 kubenswrapper[8988]: I1203 13:56:06.312505 8988 scope.go:117] "RemoveContainer" containerID="1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6" Dec 03 13:56:06.312536 master-0 kubenswrapper[8988]: I1203 13:56:06.312518 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq" Dec 03 13:56:06.323840 master-0 kubenswrapper[8988]: I1203 13:56:06.318483 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" Dec 03 13:56:06.327090 master-0 kubenswrapper[8988]: I1203 13:56:06.326636 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"0865064c1bd01843e6eb79589acf3b6fd2673d981fffa22a338dc8de926dc48d"} Dec 03 13:56:06.327090 master-0 kubenswrapper[8988]: I1203 13:56:06.326708 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"e6b966f01e7e3408e771c2ea81f969784f4a00c9a2661b4bb153e1c828b9ea87"} Dec 03 13:56:06.359128 master-0 kubenswrapper[8988]: I1203 13:56:06.357071 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=4.357044106 podStartE2EDuration="4.357044106s" podCreationTimestamp="2025-12-03 13:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:06.356391437 +0000 UTC m=+67.544459750" 
watchObservedRunningTime="2025-12-03 13:56:06.357044106 +0000 UTC m=+67.545112409" Dec 03 13:56:06.404612 master-0 kubenswrapper[8988]: I1203 13:56:06.404526 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" podStartSLOduration=3.404506197 podStartE2EDuration="3.404506197s" podCreationTimestamp="2025-12-03 13:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:06.403095766 +0000 UTC m=+67.591164049" watchObservedRunningTime="2025-12-03 13:56:06.404506197 +0000 UTC m=+67.592574480" Dec 03 13:56:06.855751 master-0 kubenswrapper[8988]: I1203 13:56:06.855066 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" podStartSLOduration=1.855035394 podStartE2EDuration="1.855035394s" podCreationTimestamp="2025-12-03 13:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:06.852795839 +0000 UTC m=+68.040864132" watchObservedRunningTime="2025-12-03 13:56:06.855035394 +0000 UTC m=+68.043103677" Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: I1203 13:56:06.892212 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"] Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: E1203 13:56:06.892576 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a528f2a3-5033-449c-b8d1-2317ecd02849" containerName="route-controller-manager" Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: I1203 13:56:06.892594 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="a528f2a3-5033-449c-b8d1-2317ecd02849" containerName="route-controller-manager" Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: I1203 13:56:06.892722 8988 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a528f2a3-5033-449c-b8d1-2317ecd02849" containerName="route-controller-manager" Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: I1203 13:56:06.893400 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.894868 master-0 kubenswrapper[8988]: I1203 13:56:06.894577 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"] Dec 03 13:56:06.896026 master-0 kubenswrapper[8988]: I1203 13:56:06.895991 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 13:56:06.896333 master-0 kubenswrapper[8988]: I1203 13:56:06.896283 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 13:56:06.896453 master-0 kubenswrapper[8988]: I1203 13:56:06.896432 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 13:56:06.896667 master-0 kubenswrapper[8988]: I1203 13:56:06.896566 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 13:56:06.896667 master-0 kubenswrapper[8988]: I1203 13:56:06.896657 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 13:56:06.897699 master-0 kubenswrapper[8988]: I1203 13:56:06.897666 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 13:56:06.898722 master-0 kubenswrapper[8988]: I1203 13:56:06.898658 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 13:56:06.899298 master-0 kubenswrapper[8988]: I1203 13:56:06.899274 8988 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 13:56:06.928504 master-0 kubenswrapper[8988]: I1203 13:56:06.924348 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podStartSLOduration=12.434436846 podStartE2EDuration="19.92431492s" podCreationTimestamp="2025-12-03 13:55:47 +0000 UTC" firstStartedPulling="2025-12-03 13:55:56.719233692 +0000 UTC m=+57.907301975" lastFinishedPulling="2025-12-03 13:56:04.209111766 +0000 UTC m=+65.397180049" observedRunningTime="2025-12-03 13:56:06.922039254 +0000 UTC m=+68.110107547" watchObservedRunningTime="2025-12-03 13:56:06.92431492 +0000 UTC m=+68.112383203" Dec 03 13:56:06.961646 master-0 kubenswrapper[8988]: I1203 13:56:06.961510 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"] Dec 03 13:56:06.965544 master-0 kubenswrapper[8988]: I1203 13:56:06.965435 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66bd7f46c9-p8fcq"] Dec 03 13:56:06.981520 master-0 kubenswrapper[8988]: I1203 13:56:06.979748 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.981839 master-0 kubenswrapper[8988]: I1203 13:56:06.981566 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.981839 master-0 kubenswrapper[8988]: I1203 13:56:06.981617 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.981839 master-0 kubenswrapper[8988]: I1203 13:56:06.981704 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.981839 master-0 kubenswrapper[8988]: I1203 13:56:06.981789 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.982059 master-0 kubenswrapper[8988]: I1203 13:56:06.981883 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.982059 master-0 kubenswrapper[8988]: I1203 13:56:06.981939 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:06.982059 master-0 kubenswrapper[8988]: I1203 13:56:06.982013 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.040353 master-0 kubenswrapper[8988]: I1203 13:56:07.038983 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b676ff-cfc1-4760-bd3d-d88c1f47053f" path="/var/lib/kubelet/pods/90b676ff-cfc1-4760-bd3d-d88c1f47053f/volumes" Dec 03 13:56:07.045333 master-0 kubenswrapper[8988]: I1203 13:56:07.045242 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a528f2a3-5033-449c-b8d1-2317ecd02849" path="/var/lib/kubelet/pods/a528f2a3-5033-449c-b8d1-2317ecd02849/volumes" Dec 03 13:56:07.047229 master-0 kubenswrapper[8988]: I1203 13:56:07.047188 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" path="/var/lib/kubelet/pods/ce26e464-9a7c-4b22-a2b4-03706b351455/volumes" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.085976 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086044 8988 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086080 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086235 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086301 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086352 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086382 
8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086410 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.087703 master-0 kubenswrapper[8988]: I1203 13:56:07.086567 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.088587 master-0 kubenswrapper[8988]: I1203 13:56:07.088554 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.092672 master-0 kubenswrapper[8988]: I1203 13:56:07.088658 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.092672 master-0 
kubenswrapper[8988]: I1203 13:56:07.088747 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.095187 master-0 kubenswrapper[8988]: I1203 13:56:07.094541 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.099075 master-0 kubenswrapper[8988]: I1203 13:56:07.099000 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.099486 master-0 kubenswrapper[8988]: I1203 13:56:07.099429 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:07.109975 master-0 kubenswrapper[8988]: I1203 13:56:07.109755 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" 
Dec 03 13:56:07.221483 master-0 kubenswrapper[8988]: I1203 13:56:07.221396 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Dec 03 13:56:07.221858 master-0 kubenswrapper[8988]: I1203 13:56:07.221695 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" containerName="installer" containerID="cri-o://aec082cc889dc6b0c5cb64df67295d0aaf07a03ff02ca780819a5ae5d89f24aa" gracePeriod=30 Dec 03 13:56:07.248457 master-0 kubenswrapper[8988]: I1203 13:56:07.248020 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:56:08.102864 master-0 kubenswrapper[8988]: I1203 13:56:08.102777 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"] Dec 03 13:56:08.103655 master-0 kubenswrapper[8988]: I1203 13:56:08.103579 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.106533 master-0 kubenswrapper[8988]: I1203 13:56:08.106480 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 13:56:08.106712 master-0 kubenswrapper[8988]: I1203 13:56:08.106642 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 13:56:08.107622 master-0 kubenswrapper[8988]: I1203 13:56:08.106757 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 13:56:08.107622 master-0 kubenswrapper[8988]: I1203 13:56:08.107016 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 13:56:08.109828 master-0 kubenswrapper[8988]: I1203 13:56:08.109768 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 13:56:08.119139 master-0 kubenswrapper[8988]: I1203 13:56:08.119055 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"] Dec 03 13:56:08.205376 master-0 kubenswrapper[8988]: I1203 13:56:08.205291 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.205713 master-0 kubenswrapper[8988]: I1203 13:56:08.205358 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.205713 master-0 kubenswrapper[8988]: I1203 13:56:08.205486 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqrv\" (UniqueName: \"kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.205713 master-0 kubenswrapper[8988]: I1203 13:56:08.205514 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.307356 master-0 kubenswrapper[8988]: I1203 13:56:08.307231 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.307356 master-0 kubenswrapper[8988]: I1203 13:56:08.307385 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.307732 master-0 kubenswrapper[8988]: I1203 13:56:08.307413 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.307732 master-0 kubenswrapper[8988]: I1203 13:56:08.307445 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqrv\" (UniqueName: \"kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.310100 master-0 kubenswrapper[8988]: I1203 13:56:08.309001 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.315828 master-0 kubenswrapper[8988]: I1203 13:56:08.313883 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.317681 master-0 kubenswrapper[8988]: I1203 13:56:08.317605 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.340105 master-0 kubenswrapper[8988]: I1203 13:56:08.339331 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqrv\" (UniqueName: \"kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv\") pod \"route-controller-manager-7f6f54d5f6-ch42s\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") " pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.455407 master-0 kubenswrapper[8988]: I1203 13:56:08.454653 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:08.832766 master-0 kubenswrapper[8988]: I1203 13:56:08.832708 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:56:08.848072 master-0 kubenswrapper[8988]: I1203 13:56:08.848020 8988 scope.go:117] "RemoveContainer" containerID="1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6" Dec 03 13:56:08.848741 master-0 kubenswrapper[8988]: E1203 13:56:08.848700 8988 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6\": container with ID starting with 1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6 not found: ID does not exist" containerID="1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6" Dec 03 13:56:08.848806 master-0 kubenswrapper[8988]: I1203 13:56:08.848741 8988 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6"} err="failed to get container status \"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6\": rpc error: code = NotFound desc = could not find container \"1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6\": container with ID starting with 1d657d76307333ef0e1cd31ce398ef56347871bf876ded17d4793fbc9a5bd9b6 not found: ID does not exist" Dec 03 13:56:09.239219 master-0 kubenswrapper[8988]: I1203 13:56:09.239126 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:56:09.380802 master-0 kubenswrapper[8988]: I1203 13:56:09.380704 8988 generic.go:334] "Generic (PLEG): container finished" podID="06d774e5-314a-49df-bdca-8e780c9af25a" containerID="27c1a40f3c3bc0e48435031dbfc32e5c0ade7b6afed6f0f6f463c37953bf90b2" exitCode=0 Dec 03 13:56:09.380802 master-0 kubenswrapper[8988]: I1203 13:56:09.380762 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerDied","Data":"27c1a40f3c3bc0e48435031dbfc32e5c0ade7b6afed6f0f6f463c37953bf90b2"} Dec 03 13:56:09.381131 master-0 kubenswrapper[8988]: I1203 13:56:09.380857 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:56:09.381500 master-0 kubenswrapper[8988]: I1203 13:56:09.381277 8988 scope.go:117] "RemoveContainer" containerID="27c1a40f3c3bc0e48435031dbfc32e5c0ade7b6afed6f0f6f463c37953bf90b2" Dec 03 13:56:09.381897 master-0 kubenswrapper[8988]: I1203 13:56:09.381707 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:56:09.399172 master-0 kubenswrapper[8988]: I1203 
13:56:09.399100 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:56:09.624363 master-0 kubenswrapper[8988]: I1203 13:56:09.624291 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Dec 03 13:56:09.624973 master-0 kubenswrapper[8988]: I1203 13:56:09.624944 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.640073 master-0 kubenswrapper[8988]: I1203 13:56:09.639710 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Dec 03 13:56:09.737958 master-0 kubenswrapper[8988]: I1203 13:56:09.737870 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.738236 master-0 kubenswrapper[8988]: I1203 13:56:09.737998 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.738236 master-0 kubenswrapper[8988]: I1203 13:56:09.738087 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.839850 master-0 
kubenswrapper[8988]: I1203 13:56:09.839769 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.839850 master-0 kubenswrapper[8988]: I1203 13:56:09.839854 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.840293 master-0 kubenswrapper[8988]: I1203 13:56:09.839916 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.840516 master-0 kubenswrapper[8988]: I1203 13:56:09.840477 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.840598 master-0 kubenswrapper[8988]: I1203 13:56:09.840534 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.859251 master-0 kubenswrapper[8988]: I1203 13:56:09.859166 8988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access\") pod \"installer-4-master-0\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:09.952332 master-0 kubenswrapper[8988]: I1203 13:56:09.951988 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:56:10.397171 master-0 kubenswrapper[8988]: I1203 13:56:10.396884 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:56:10.978826 master-0 kubenswrapper[8988]: I1203 13:56:10.978763 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Dec 03 13:56:11.021340 master-0 kubenswrapper[8988]: I1203 13:56:11.017723 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Dec 03 13:56:11.021340 master-0 kubenswrapper[8988]: I1203 13:56:11.018368 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.031072 master-0 kubenswrapper[8988]: I1203 13:56:11.029405 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-4xstp" Dec 03 13:56:11.031072 master-0 kubenswrapper[8988]: I1203 13:56:11.029683 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 13:56:11.073566 master-0 kubenswrapper[8988]: I1203 13:56:11.071834 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.073566 master-0 kubenswrapper[8988]: I1203 13:56:11.071884 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.073566 master-0 kubenswrapper[8988]: I1203 13:56:11.071911 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.073566 master-0 kubenswrapper[8988]: I1203 13:56:11.072815 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Dec 03 13:56:11.128211 master-0 
kubenswrapper[8988]: I1203 13:56:11.123875 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"] Dec 03 13:56:11.169685 master-0 kubenswrapper[8988]: I1203 13:56:11.169573 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"] Dec 03 13:56:11.180068 master-0 kubenswrapper[8988]: I1203 13:56:11.179059 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.180068 master-0 kubenswrapper[8988]: I1203 13:56:11.179128 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.180068 master-0 kubenswrapper[8988]: I1203 13:56:11.179160 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.180068 master-0 kubenswrapper[8988]: I1203 13:56:11.179276 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.180068 
master-0 kubenswrapper[8988]: I1203 13:56:11.179327 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.233726 master-0 kubenswrapper[8988]: I1203 13:56:11.233626 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.394526 master-0 kubenswrapper[8988]: I1203 13:56:11.393485 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:56:11.434995 master-0 kubenswrapper[8988]: I1203 13:56:11.434925 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"18fec28d3c23557c11d08f9c713623d04b2f8661479f3eb4912bd29ec38e3095"} Dec 03 13:56:11.444148 master-0 kubenswrapper[8988]: I1203 13:56:11.444003 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"7b7b39cbc465950099d443ba55157c7cb3156ce39395acb012a6ca2ccd08dfcd"} Dec 03 13:56:11.445000 master-0 kubenswrapper[8988]: I1203 13:56:11.444949 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:56:11.446104 master-0 kubenswrapper[8988]: I1203 13:56:11.445852 8988 patch_prober.go:28] interesting 
pod/marketplace-operator-7d67745bb7-dwcxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body= Dec 03 13:56:11.446104 master-0 kubenswrapper[8988]: I1203 13:56:11.445918 8988 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" Dec 03 13:56:11.478472 master-0 kubenswrapper[8988]: I1203 13:56:11.478405 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"1819ee8dddbb1231277ada301d8d1ef733f0d9656e6fbb70ea5bc8f0833fffdf"} Dec 03 13:56:11.487069 master-0 kubenswrapper[8988]: I1203 13:56:11.486991 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"725fa88d-f29d-4dee-bfba-6e1c4506f73c","Type":"ContainerStarted","Data":"d042c55327fe42c18032feedcbcb89d5a0275f42d648331d12b63fb7f5eab7f6"} Dec 03 13:56:11.490733 master-0 kubenswrapper[8988]: I1203 13:56:11.490685 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" event={"ID":"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6","Type":"ContainerStarted","Data":"214cfb90bd01e2a38cc00a74c6415843347ddcbd2c5b20e0758acbc9d4f19c58"} Dec 03 13:56:11.493606 master-0 kubenswrapper[8988]: I1203 13:56:11.493587 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" 
event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"f1e6e9b5cc1123cc229e5b5c55833cf8c55b534df02d94f2822bf88d34528957"} Dec 03 13:56:11.500402 master-0 kubenswrapper[8988]: I1203 13:56:11.499573 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerStarted","Data":"bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881"} Dec 03 13:56:11.516483 master-0 kubenswrapper[8988]: I1203 13:56:11.512607 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"b4b917452a521c232a7ed0b8210f7d170f0475780f45ece2de8eb5bc177f4ed7"} Dec 03 13:56:12.244829 master-0 kubenswrapper[8988]: I1203 13:56:12.243753 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Dec 03 13:56:12.251544 master-0 kubenswrapper[8988]: W1203 13:56:12.250522 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9477082b_005c_4ff5_812a_7c3230f60da2.slice/crio-3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305 WatchSource:0}: Error finding container 3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305: Status 404 returned error can't find the container with id 3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305 Dec 03 13:56:12.541138 master-0 kubenswrapper[8988]: I1203 13:56:12.537395 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"725fa88d-f29d-4dee-bfba-6e1c4506f73c","Type":"ContainerStarted","Data":"6f689de8a1f834cf175e7a94d531ac0bc5fbc598832080a29d70489ce59fa461"} Dec 03 13:56:12.550386 master-0 kubenswrapper[8988]: I1203 13:56:12.549503 8988 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" event={"ID":"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6","Type":"ContainerStarted","Data":"55445c84ee27dfac14466bc7e8118d367fd0229276697cf9717729560bc34702"} Dec 03 13:56:12.550386 master-0 kubenswrapper[8988]: I1203 13:56:12.550241 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:12.559612 master-0 kubenswrapper[8988]: I1203 13:56:12.556951 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" Dec 03 13:56:12.560765 master-0 kubenswrapper[8988]: I1203 13:56:12.560039 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9477082b-005c-4ff5-812a-7c3230f60da2","Type":"ContainerStarted","Data":"3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305"} Dec 03 13:56:12.587065 master-0 kubenswrapper[8988]: I1203 13:56:12.575865 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerStarted","Data":"4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc"} Dec 03 13:56:12.587065 master-0 kubenswrapper[8988]: I1203 13:56:12.583139 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=3.5831120309999998 podStartE2EDuration="3.583112031s" podCreationTimestamp="2025-12-03 13:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:12.5676066 +0000 UTC m=+73.755674873" watchObservedRunningTime="2025-12-03 13:56:12.583112031 
+0000 UTC m=+73.771180314" Dec 03 13:56:12.602301 master-0 kubenswrapper[8988]: I1203 13:56:12.596517 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"5fe6c28d1bda0d80f5409b45f5f8db53ee77efab8e8303d60d2351d01ed9439c"} Dec 03 13:56:12.619300 master-0 kubenswrapper[8988]: I1203 13:56:12.611280 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" podStartSLOduration=9.611227479 podStartE2EDuration="9.611227479s" podCreationTimestamp="2025-12-03 13:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:56:12.603831014 +0000 UTC m=+73.791899307" watchObservedRunningTime="2025-12-03 13:56:12.611227479 +0000 UTC m=+73.799295762" Dec 03 13:56:12.625277 master-0 kubenswrapper[8988]: I1203 13:56:12.624068 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:56:13.604314 master-0 kubenswrapper[8988]: I1203 13:56:13.604212 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9477082b-005c-4ff5-812a-7c3230f60da2","Type":"ContainerStarted","Data":"329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee"} Dec 03 13:56:14.182332 master-0 kubenswrapper[8988]: I1203 13:56:14.181209 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.181181075 podStartE2EDuration="3.181181075s" podCreationTimestamp="2025-12-03 13:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 
13:56:14.179908488 +0000 UTC m=+75.367976781" watchObservedRunningTime="2025-12-03 13:56:14.181181075 +0000 UTC m=+75.369249378" Dec 03 13:56:16.653488 master-0 kubenswrapper[8988]: I1203 13:56:16.653386 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5m4f8" Dec 03 13:56:16.880324 master-0 kubenswrapper[8988]: I1203 13:56:16.874661 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 13:56:16.880324 master-0 kubenswrapper[8988]: I1203 13:56:16.880139 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:16.899308 master-0 kubenswrapper[8988]: I1203 13:56:16.891759 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-6sltv" Dec 03 13:56:16.915288 master-0 kubenswrapper[8988]: I1203 13:56:16.905917 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 13:56:17.025835 master-0 kubenswrapper[8988]: I1203 13:56:17.025575 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.025835 master-0 kubenswrapper[8988]: I1203 13:56:17.025649 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " 
pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.127227 master-0 kubenswrapper[8988]: I1203 13:56:17.127159 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.127539 master-0 kubenswrapper[8988]: I1203 13:56:17.127272 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.131235 master-0 kubenswrapper[8988]: I1203 13:56:17.131160 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.147138 master-0 kubenswrapper[8988]: I1203 13:56:17.147083 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.316152 master-0 kubenswrapper[8988]: I1203 13:56:17.316089 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:56:17.629204 master-0 kubenswrapper[8988]: I1203 13:56:17.629059 8988 generic.go:334] "Generic (PLEG): container finished" podID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" containerID="031ccde9164bce9c6766c132214d7fc14f96617b1164fd862cc2ac3b1e1bb8bf" exitCode=0 Dec 03 13:56:17.629204 master-0 kubenswrapper[8988]: I1203 13:56:17.629126 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerDied","Data":"031ccde9164bce9c6766c132214d7fc14f96617b1164fd862cc2ac3b1e1bb8bf"} Dec 03 13:56:17.630446 master-0 kubenswrapper[8988]: I1203 13:56:17.630406 8988 scope.go:117] "RemoveContainer" containerID="031ccde9164bce9c6766c132214d7fc14f96617b1164fd862cc2ac3b1e1bb8bf" Dec 03 13:56:18.223597 master-0 kubenswrapper[8988]: I1203 13:56:18.223207 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 13:56:18.229393 master-0 kubenswrapper[8988]: W1203 13:56:18.229317 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22673f47_9484_4eed_bbce_888588c754ed.slice/crio-dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e WatchSource:0}: Error finding container dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e: Status 404 returned error can't find the container with id dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e Dec 03 13:56:18.368719 master-0 kubenswrapper[8988]: I1203 13:56:18.368531 8988 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Dec 03 13:56:18.369014 master-0 kubenswrapper[8988]: I1203 13:56:18.368938 8988 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcdctl" containerID="cri-o://886fbb171cc796081daa33c863e0ffd8e881f69d0055d5d49edec8b6ff9d962d" gracePeriod=30 Dec 03 13:56:18.369067 master-0 kubenswrapper[8988]: I1203 13:56:18.369005 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcd" containerID="cri-o://d411a9d4993d118dc0e255c06261c1eb2d14f7c6ba1e4128eeb20ef007aba795" gracePeriod=30 Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 13:56:18.396524 8988 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: E1203 13:56:18.396804 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcd" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 13:56:18.396819 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcd" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: E1203 13:56:18.396830 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcdctl" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 13:56:18.396837 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcdctl" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 13:56:18.396923 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcd" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 13:56:18.396936 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b95a38663dd6fe34e183818a475977" containerName="etcdctl" Dec 03 13:56:18.402113 master-0 kubenswrapper[8988]: I1203 
13:56:18.398677 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544108 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544221 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544243 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544301 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544326 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " 
pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.544781 master-0 kubenswrapper[8988]: I1203 13:56:18.544400 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.642010 master-0 kubenswrapper[8988]: I1203 13:56:18.641926 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"f54a591516c12cb6891bab583755b73f8ae231401e36a574e299594f8347c7ec"} Dec 03 13:56:18.642362 master-0 kubenswrapper[8988]: I1203 13:56:18.642189 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:56:18.645413 master-0 kubenswrapper[8988]: I1203 13:56:18.645348 8988 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="f40d880c3949fa39a1d71100e4b83bfcc1b96c7301f0caf0577fed05fde9d024" exitCode=0 Dec 03 13:56:18.645580 master-0 kubenswrapper[8988]: I1203 13:56:18.645508 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerDied","Data":"f40d880c3949fa39a1d71100e4b83bfcc1b96c7301f0caf0577fed05fde9d024"} Dec 03 13:56:18.646824 master-0 kubenswrapper[8988]: I1203 13:56:18.646573 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:56:18.646824 
master-0 kubenswrapper[8988]: I1203 13:56:18.646780 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.646884 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.646919 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.646973 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647021 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647118 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647188 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647415 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647454 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647522 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.648621 master-0 kubenswrapper[8988]: I1203 13:56:18.647600 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:18.649468 master-0 kubenswrapper[8988]: I1203 13:56:18.649415 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760"}
Dec 03 13:56:18.653859 master-0 kubenswrapper[8988]: I1203 13:56:18.653797 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c"}
Dec 03 13:56:18.653955 master-0 kubenswrapper[8988]: I1203 13:56:18.653867 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e"}
Dec 03 13:56:19.670150 master-0 kubenswrapper[8988]: I1203 13:56:19.670061 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b"}
Dec 03 13:56:21.684935 master-0 kubenswrapper[8988]: I1203 13:56:21.684830 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092"}
Dec 03 13:56:22.247919 master-0 kubenswrapper[8988]: I1203 13:56:22.247767 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:56:22.247919 master-0 kubenswrapper[8988]: I1203 13:56:22.247871 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:56:31.429200 master-0 kubenswrapper[8988]: E1203 13:56:31.429015 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:31.429980 master-0 kubenswrapper[8988]: I1203 13:56:31.429945 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:31.744825 master-0 kubenswrapper[8988]: I1203 13:56:31.744733 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_0056326a-6d5e-47cd-b350-c5c6287cfe2a/installer/0.log"
Dec 03 13:56:31.745195 master-0 kubenswrapper[8988]: I1203 13:56:31.744831 8988 generic.go:334] "Generic (PLEG): container finished" podID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" containerID="aec082cc889dc6b0c5cb64df67295d0aaf07a03ff02ca780819a5ae5d89f24aa" exitCode=1
Dec 03 13:56:31.745195 master-0 kubenswrapper[8988]: I1203 13:56:31.744978 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"0056326a-6d5e-47cd-b350-c5c6287cfe2a","Type":"ContainerDied","Data":"aec082cc889dc6b0c5cb64df67295d0aaf07a03ff02ca780819a5ae5d89f24aa"}
Dec 03 13:56:31.746759 master-0 kubenswrapper[8988]: I1203 13:56:31.746672 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"2d9b8c691d5f3ee7b94a063c9932a9e9584dbd2cc766bb12c9c9139903e78355"}
Dec 03 13:56:32.248687 master-0 kubenswrapper[8988]: I1203 13:56:32.248610 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 13:56:32.249116 master-0 kubenswrapper[8988]: I1203 13:56:32.248691 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:32.324080 master-0 kubenswrapper[8988]: I1203 13:56:32.324017 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_0056326a-6d5e-47cd-b350-c5c6287cfe2a/installer/0.log"
Dec 03 13:56:32.324400 master-0 kubenswrapper[8988]: I1203 13:56:32.324102 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Dec 03 13:56:32.423510 master-0 kubenswrapper[8988]: I1203 13:56:32.422903 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir\") pod \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") "
Dec 03 13:56:32.423510 master-0 kubenswrapper[8988]: I1203 13:56:32.423141 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock\") pod \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") "
Dec 03 13:56:32.423510 master-0 kubenswrapper[8988]: I1203 13:56:32.423210 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access\") pod \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\" (UID: \"0056326a-6d5e-47cd-b350-c5c6287cfe2a\") "
Dec 03 13:56:32.423510 master-0 kubenswrapper[8988]: I1203 13:56:32.423315 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0056326a-6d5e-47cd-b350-c5c6287cfe2a" (UID: "0056326a-6d5e-47cd-b350-c5c6287cfe2a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:32.423510 master-0 kubenswrapper[8988]: I1203 13:56:32.423378 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock" (OuterVolumeSpecName: "var-lock") pod "0056326a-6d5e-47cd-b350-c5c6287cfe2a" (UID: "0056326a-6d5e-47cd-b350-c5c6287cfe2a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:32.424325 master-0 kubenswrapper[8988]: I1203 13:56:32.423702 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:32.424325 master-0 kubenswrapper[8988]: I1203 13:56:32.423716 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0056326a-6d5e-47cd-b350-c5c6287cfe2a-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:32.429513 master-0 kubenswrapper[8988]: I1203 13:56:32.429458 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0056326a-6d5e-47cd-b350-c5c6287cfe2a" (UID: "0056326a-6d5e-47cd-b350-c5c6287cfe2a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:56:32.525214 master-0 kubenswrapper[8988]: I1203 13:56:32.525019 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0056326a-6d5e-47cd-b350-c5c6287cfe2a-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:32.755491 master-0 kubenswrapper[8988]: I1203 13:56:32.755416 8988 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d" exitCode=1
Dec 03 13:56:32.755491 master-0 kubenswrapper[8988]: I1203 13:56:32.755492 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d"}
Dec 03 13:56:32.755849 master-0 kubenswrapper[8988]: I1203 13:56:32.755533 8988 scope.go:117] "RemoveContainer" containerID="7a8ac7f1eaa0fb2be0a1133bae4e58796d9dd0e618d4f3e8889a09897fd6e89b"
Dec 03 13:56:32.756316 master-0 kubenswrapper[8988]: I1203 13:56:32.756247 8988 scope.go:117] "RemoveContainer" containerID="34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d"
Dec 03 13:56:32.758163 master-0 kubenswrapper[8988]: I1203 13:56:32.757996 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_0056326a-6d5e-47cd-b350-c5c6287cfe2a/installer/0.log"
Dec 03 13:56:32.758273 master-0 kubenswrapper[8988]: I1203 13:56:32.758182 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Dec 03 13:56:32.758273 master-0 kubenswrapper[8988]: I1203 13:56:32.758219 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"0056326a-6d5e-47cd-b350-c5c6287cfe2a","Type":"ContainerDied","Data":"7b6c6e3dccf66eea8dc77784a207d387c88a1f16890ee62b88e320670dcc2abd"}
Dec 03 13:56:32.762833 master-0 kubenswrapper[8988]: I1203 13:56:32.762782 8988 generic.go:334] "Generic (PLEG): container finished" podID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerID="8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e" exitCode=0
Dec 03 13:56:32.762943 master-0 kubenswrapper[8988]: I1203 13:56:32.762865 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6","Type":"ContainerDied","Data":"8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e"}
Dec 03 13:56:32.765019 master-0 kubenswrapper[8988]: I1203 13:56:32.764947 8988 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589" exitCode=0
Dec 03 13:56:32.765019 master-0 kubenswrapper[8988]: I1203 13:56:32.764999 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589"}
Dec 03 13:56:32.820521 master-0 kubenswrapper[8988]: I1203 13:56:32.820464 8988 scope.go:117] "RemoveContainer" containerID="aec082cc889dc6b0c5cb64df67295d0aaf07a03ff02ca780819a5ae5d89f24aa"
Dec 03 13:56:33.781635 master-0 kubenswrapper[8988]: I1203 13:56:33.781531 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c"}
Dec 03 13:56:34.129418 master-0 kubenswrapper[8988]: I1203 13:56:34.129352 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 13:56:34.247425 master-0 kubenswrapper[8988]: I1203 13:56:34.247344 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access\") pod \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") "
Dec 03 13:56:34.247735 master-0 kubenswrapper[8988]: I1203 13:56:34.247460 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock\") pod \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") "
Dec 03 13:56:34.247735 master-0 kubenswrapper[8988]: I1203 13:56:34.247535 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir\") pod \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\" (UID: \"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6\") "
Dec 03 13:56:34.247735 master-0 kubenswrapper[8988]: I1203 13:56:34.247567 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock" (OuterVolumeSpecName: "var-lock") pod "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" (UID: "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:34.247735 master-0 kubenswrapper[8988]: I1203 13:56:34.247685 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" (UID: "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:34.247918 master-0 kubenswrapper[8988]: I1203 13:56:34.247873 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:34.247918 master-0 kubenswrapper[8988]: I1203 13:56:34.247902 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:34.250925 master-0 kubenswrapper[8988]: I1203 13:56:34.250878 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" (UID: "2cfe6ad9-3234-47eb-8512-87dd87f7b3a6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:56:34.349755 master-0 kubenswrapper[8988]: I1203 13:56:34.349653 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2cfe6ad9-3234-47eb-8512-87dd87f7b3a6-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:34.793113 master-0 kubenswrapper[8988]: I1203 13:56:34.792922 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"2cfe6ad9-3234-47eb-8512-87dd87f7b3a6","Type":"ContainerDied","Data":"6721d25c3e1746543af1f9a5ba41f4231e36342be8a55d2319256cbd81592116"}
Dec 03 13:56:34.793113 master-0 kubenswrapper[8988]: I1203 13:56:34.793029 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6721d25c3e1746543af1f9a5ba41f4231e36342be8a55d2319256cbd81592116"
Dec 03 13:56:34.793113 master-0 kubenswrapper[8988]: I1203 13:56:34.792962 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 13:56:34.796921 master-0 kubenswrapper[8988]: I1203 13:56:34.796847 8988 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c" exitCode=1
Dec 03 13:56:34.796921 master-0 kubenswrapper[8988]: I1203 13:56:34.796916 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c"}
Dec 03 13:56:34.797073 master-0 kubenswrapper[8988]: I1203 13:56:34.796972 8988 scope.go:117] "RemoveContainer" containerID="9b70cc3592f40731e0c5d65f8d5e5454bb2c29bf43d6d350722f294c1e320ea2"
Dec 03 13:56:34.798026 master-0 kubenswrapper[8988]: I1203 13:56:34.797985 8988 scope.go:117] "RemoveContainer" containerID="95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c"
Dec 03 13:56:35.152595 master-0 kubenswrapper[8988]: E1203 13:56:35.152235 8988 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:56:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:56:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:56:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:56:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\"],\\\"sizeBytes\\\":1631769045},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c\\\"],\\\"sizeBytes\\\":1232076476},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a\\\"],\\\"sizeBytes\\\":983731853},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\"],\\\"sizeBytes\\\":938321573},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf\\\"],\\\"sizeBytes\\\":870581225},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e5c0acdd03dc840d7345ae397feaf6147a32a8fef89a0ac2ddc8d14b068c9ff\\\"],\\\"sizeBytes\\\":767313881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4\\\"],\\\"sizeBytes\\\":682385666},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda\\\"],\\\"sizeBytes\\\":677540255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14\\\"],\\\"sizeBytes\\\":672854011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8\\\"],\\\"sizeBytes\\\":616123373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51\\\"],\\\"sizeBytes\\\":583850203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe\\\"],\\\"sizeBytes\\\":576621883},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60\\\"],\\\"sizeBytes\\\":552687886},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf\\\"],\\\"sizeBytes\\\":543241813},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\"],\\\"sizeBytes\\\":532668041},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da\\\"],\\\"sizeBytes\\\":532338751},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\"],\\\"sizeBytes\\\":512852463},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7\\\"],\\\"sizeBytes\\\":512468025},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\"],\\\"sizeBytes\\\":509451797},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110\\\"],\\\"sizeBytes\\\":507701628},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17\\\"],\\\"sizeBytes\\\":506755373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419\\\"],\\\"sizeBytes\\\":505663073},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908\\\"],\\\"sizeBytes\\\":505642108},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36\\\"],\\\"sizeBytes\\\":503354646},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de\\\"],\\\"sizeBytes\\\":503025552},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395\\\"],\\\"sizeBytes\\\":502450335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9\\\"],\\\"sizeBytes\\\":500957387},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\"],\\\"sizeBytes\\\":500863090},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\\\"],\\\"sizeBytes\\\":499719811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5\\\"],\\\"sizeBytes\\\":499096673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08\\\"],\\\"sizeBytes\\\":489542560},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8\\\"],\\\"sizeBytes\\\":481573011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20\\\"],\\\"sizeBytes\\\":478931717},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6\\\"],\\\"sizeBytes\\\":478655954},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca\\\"],\\\"sizeBytes\\\":462741734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\"],\\\"sizeBytes\\\":459566623},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831\\\"],\\\"sizeBytes\\\":458183681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601\\\"],\\\"sizeBytes\\\":452603646},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e\\\"],\\\"sizeBytes\\\":451053419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d\\\"],\\\"sizeBytes\\\":443305841},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81\\\"],\\\"sizeBytes\\\":442523452},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f\\\"],\\\"sizeBytes\\\":437751308},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926\\\"],\\\"sizeBytes\\\":406067436},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f\\\"],\\\"sizeBytes\\\":401824348},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd\\\"],\\\"sizeBytes\\\":391002580}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:35.289753 master-0 kubenswrapper[8988]: E1203 13:56:35.289584 8988 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:36.809796 master-0 kubenswrapper[8988]: I1203 13:56:36.809708 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6"}
Dec 03 13:56:40.437919 master-0 kubenswrapper[8988]: I1203 13:56:40.437826 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:56:40.805117 master-0 kubenswrapper[8988]: I1203 13:56:40.805009 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:56:42.249344 master-0 kubenswrapper[8988]: I1203 13:56:42.249200 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 13:56:42.249344 master-0 kubenswrapper[8988]: I1203 13:56:42.249315 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:43.805329 master-0 kubenswrapper[8988]: I1203 13:56:43.805202 8988 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:45.153432 master-0 kubenswrapper[8988]: E1203 13:56:45.153323 8988 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:45.291122 master-0 kubenswrapper[8988]: E1203 13:56:45.290983 8988 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:45.773378 master-0 kubenswrapper[8988]: E1203 13:56:45.773153 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Dec 03 13:56:45.866666 master-0 kubenswrapper[8988]: I1203 13:56:45.866569 8988 generic.go:334] "Generic (PLEG): container finished" podID="41b95a38663dd6fe34e183818a475977" containerID="d411a9d4993d118dc0e255c06261c1eb2d14f7c6ba1e4128eeb20ef007aba795" exitCode=0
Dec 03 13:56:46.874892 master-0 kubenswrapper[8988]: I1203 13:56:46.874779 8988 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3" exitCode=0
Dec 03 13:56:46.874892 master-0 kubenswrapper[8988]: I1203 13:56:46.874844 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3"}
Dec 03 13:56:50.900172 master-0 kubenswrapper[8988]: I1203 13:56:50.900090 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_41b95a38663dd6fe34e183818a475977/etcdctl/0.log"
Dec 03 13:56:50.900172 master-0 kubenswrapper[8988]: I1203 13:56:50.900169 8988 generic.go:334] "Generic (PLEG): container finished" podID="41b95a38663dd6fe34e183818a475977" containerID="886fbb171cc796081daa33c863e0ffd8e881f69d0055d5d49edec8b6ff9d962d" exitCode=137
Dec 03 13:56:52.250082 master-0 kubenswrapper[8988]: I1203 13:56:52.249961 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 13:56:52.250082 master-0 kubenswrapper[8988]: I1203 13:56:52.250079 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:56:52.340068 master-0 kubenswrapper[8988]: I1203 13:56:52.339954 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_41b95a38663dd6fe34e183818a475977/etcdctl/0.log"
Dec 03 13:56:52.340068 master-0 kubenswrapper[8988]: I1203 13:56:52.340084 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Dec 03 13:56:52.380759 master-0 kubenswrapper[8988]: E1203 13:56:52.380395 8988 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.187db92087e25fa8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:41b95a38663dd6fe34e183818a475977,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:56:18.36895428 +0000 UTC m=+79.557022613,LastTimestamp:2025-12-03 13:56:18.36895428 +0000 UTC m=+79.557022613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:56:52.483895 master-0 kubenswrapper[8988]: I1203 13:56:52.483744 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") pod \"41b95a38663dd6fe34e183818a475977\" (UID: \"41b95a38663dd6fe34e183818a475977\") "
Dec 03 13:56:52.483895 master-0 kubenswrapper[8988]: I1203 13:56:52.483871 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") pod \"41b95a38663dd6fe34e183818a475977\" (UID: \"41b95a38663dd6fe34e183818a475977\") "
Dec 03 13:56:52.483895 master-0 kubenswrapper[8988]: I1203 13:56:52.483910 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir" (OuterVolumeSpecName: "data-dir") pod "41b95a38663dd6fe34e183818a475977" (UID: "41b95a38663dd6fe34e183818a475977"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:52.484500 master-0 kubenswrapper[8988]: I1203 13:56:52.484055 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs" (OuterVolumeSpecName: "certs") pod "41b95a38663dd6fe34e183818a475977" (UID: "41b95a38663dd6fe34e183818a475977"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:56:52.484500 master-0 kubenswrapper[8988]: I1203 13:56:52.484342 8988 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-data-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:52.484500 master-0 kubenswrapper[8988]: I1203 13:56:52.484383 8988 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/41b95a38663dd6fe34e183818a475977-certs\") on node \"master-0\" DevicePath \"\""
Dec 03 13:56:52.922603 master-0 kubenswrapper[8988]: I1203 13:56:52.922499 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5f574c6c79-86bh9_5aa67ace-d03a-4d06-9fb5-24777b65f2cc/kube-scheduler-operator-container/1.log"
Dec 03 13:56:52.923405 master-0 kubenswrapper[8988]: I1203 13:56:52.923215 8988 generic.go:334] "Generic (PLEG): container finished" podID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" containerID="b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760" exitCode=255
Dec 03 13:56:52.923533 master-0 kubenswrapper[8988]: I1203 13:56:52.923320 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerDied","Data":"b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760"}
Dec 03 13:56:52.923533 master-0 kubenswrapper[8988]: I1203 13:56:52.923482 8988 scope.go:117] "RemoveContainer" containerID="031ccde9164bce9c6766c132214d7fc14f96617b1164fd862cc2ac3b1e1bb8bf"
Dec 03 13:56:52.924044 master-0 kubenswrapper[8988]: I1203 13:56:52.924002 8988 scope.go:117] "RemoveContainer" containerID="b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760"
Dec 03 13:56:52.924353 master-0 kubenswrapper[8988]: E1203 13:56:52.924294 8988 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-5f574c6c79-86bh9_openshift-kube-scheduler-operator(5aa67ace-d03a-4d06-9fb5-24777b65f2cc)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 13:56:52.926102 master-0 kubenswrapper[8988]: I1203 13:56:52.926056 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_41b95a38663dd6fe34e183818a475977/etcdctl/0.log"
Dec 03 13:56:52.926383 master-0 kubenswrapper[8988]: I1203 13:56:52.926312 8988 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:56:52.928307 master-0 kubenswrapper[8988]: I1203 13:56:52.928244 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ee6150c4-22d1-465b-a934-74d5e197d646/installer/0.log" Dec 03 13:56:52.928430 master-0 kubenswrapper[8988]: I1203 13:56:52.928317 8988 generic.go:334] "Generic (PLEG): container finished" podID="ee6150c4-22d1-465b-a934-74d5e197d646" containerID="9bec250a37c6fd420e6a68fa34a40e8bf74f0c10fd29a6d0f7605bcfd065e230" exitCode=1 Dec 03 13:56:52.928430 master-0 kubenswrapper[8988]: I1203 13:56:52.928360 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ee6150c4-22d1-465b-a934-74d5e197d646","Type":"ContainerDied","Data":"9bec250a37c6fd420e6a68fa34a40e8bf74f0c10fd29a6d0f7605bcfd065e230"} Dec 03 13:56:53.030355 master-0 kubenswrapper[8988]: I1203 13:56:53.030243 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b95a38663dd6fe34e183818a475977" path="/var/lib/kubelet/pods/41b95a38663dd6fe34e183818a475977/volumes" Dec 03 13:56:53.030795 master-0 kubenswrapper[8988]: I1203 13:56:53.030706 8988 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Dec 03 13:56:53.805357 master-0 kubenswrapper[8988]: I1203 13:56:53.805188 8988 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 13:56:54.079762 master-0 kubenswrapper[8988]: I1203 13:56:54.078533 8988 scope.go:117] "RemoveContainer" containerID="d411a9d4993d118dc0e255c06261c1eb2d14f7c6ba1e4128eeb20ef007aba795" Dec 03 
13:56:54.097218 master-0 kubenswrapper[8988]: I1203 13:56:54.097163 8988 scope.go:117] "RemoveContainer" containerID="886fbb171cc796081daa33c863e0ffd8e881f69d0055d5d49edec8b6ff9d962d" Dec 03 13:56:54.259567 master-0 kubenswrapper[8988]: I1203 13:56:54.259515 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ee6150c4-22d1-465b-a934-74d5e197d646/installer/0.log" Dec 03 13:56:54.259699 master-0 kubenswrapper[8988]: I1203 13:56:54.259624 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Dec 03 13:56:54.411578 master-0 kubenswrapper[8988]: I1203 13:56:54.411162 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir\") pod \"ee6150c4-22d1-465b-a934-74d5e197d646\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " Dec 03 13:56:54.411578 master-0 kubenswrapper[8988]: I1203 13:56:54.411349 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ee6150c4-22d1-465b-a934-74d5e197d646" (UID: "ee6150c4-22d1-465b-a934-74d5e197d646"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:56:54.411578 master-0 kubenswrapper[8988]: I1203 13:56:54.411432 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access\") pod \"ee6150c4-22d1-465b-a934-74d5e197d646\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " Dec 03 13:56:54.411578 master-0 kubenswrapper[8988]: I1203 13:56:54.411514 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock\") pod \"ee6150c4-22d1-465b-a934-74d5e197d646\" (UID: \"ee6150c4-22d1-465b-a934-74d5e197d646\") " Dec 03 13:56:54.412876 master-0 kubenswrapper[8988]: I1203 13:56:54.411741 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock" (OuterVolumeSpecName: "var-lock") pod "ee6150c4-22d1-465b-a934-74d5e197d646" (UID: "ee6150c4-22d1-465b-a934-74d5e197d646"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:56:54.412876 master-0 kubenswrapper[8988]: I1203 13:56:54.411999 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:54.412876 master-0 kubenswrapper[8988]: I1203 13:56:54.412039 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ee6150c4-22d1-465b-a934-74d5e197d646-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:54.417207 master-0 kubenswrapper[8988]: I1203 13:56:54.417144 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ee6150c4-22d1-465b-a934-74d5e197d646" (UID: "ee6150c4-22d1-465b-a934-74d5e197d646"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:56:54.513219 master-0 kubenswrapper[8988]: I1203 13:56:54.513101 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ee6150c4-22d1-465b-a934-74d5e197d646-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 13:56:54.941118 master-0 kubenswrapper[8988]: I1203 13:56:54.941062 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ee6150c4-22d1-465b-a934-74d5e197d646/installer/0.log" Dec 03 13:56:54.942108 master-0 kubenswrapper[8988]: I1203 13:56:54.941253 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Dec 03 13:56:54.943951 master-0 kubenswrapper[8988]: I1203 13:56:54.943913 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5f574c6c79-86bh9_5aa67ace-d03a-4d06-9fb5-24777b65f2cc/kube-scheduler-operator-container/1.log" Dec 03 13:56:55.153963 master-0 kubenswrapper[8988]: E1203 13:56:55.153821 8988 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:56:55.292398 master-0 kubenswrapper[8988]: E1203 13:56:55.291988 8988 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:56:58.971826 master-0 kubenswrapper[8988]: I1203 13:56:58.971732 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9477082b-005c-4ff5-812a-7c3230f60da2/installer/0.log" Dec 03 13:56:58.971826 master-0 kubenswrapper[8988]: I1203 13:56:58.971824 8988 generic.go:334] "Generic (PLEG): container finished" podID="9477082b-005c-4ff5-812a-7c3230f60da2" containerID="329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee" exitCode=1 Dec 03 13:56:59.881891 master-0 kubenswrapper[8988]: E1203 13:56:59.881782 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:00.987094 master-0 kubenswrapper[8988]: I1203 13:57:00.986992 8988 generic.go:334] "Generic (PLEG): container finished" 
podID="ebf07eb54db570834b7c9a90b6b07403" containerID="5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92" exitCode=0 Dec 03 13:57:02.251414 master-0 kubenswrapper[8988]: I1203 13:57:02.251328 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": context deadline exceeded" start-of-body= Dec 03 13:57:02.252173 master-0 kubenswrapper[8988]: I1203 13:57:02.251450 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": context deadline exceeded" Dec 03 13:57:03.805113 master-0 kubenswrapper[8988]: I1203 13:57:03.804972 8988 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:05.154655 master-0 kubenswrapper[8988]: E1203 13:57:05.154459 8988 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:05.294009 master-0 kubenswrapper[8988]: E1203 13:57:05.293822 8988 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:12.252581 master-0 
kubenswrapper[8988]: I1203 13:57:12.252397 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 13:57:12.252581 master-0 kubenswrapper[8988]: I1203 13:57:12.252541 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:13.993128 master-0 kubenswrapper[8988]: E1203 13:57:13.993055 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:15.155853 master-0 kubenswrapper[8988]: E1203 13:57:15.155743 8988 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:15.155853 master-0 kubenswrapper[8988]: E1203 13:57:15.155831 8988 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 13:57:15.295422 master-0 kubenswrapper[8988]: E1203 13:57:15.294447 8988 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 13:57:15.295422 master-0 
kubenswrapper[8988]: I1203 13:57:15.294532 8988 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 13:57:16.091040 master-0 kubenswrapper[8988]: I1203 13:57:16.090974 8988 generic.go:334] "Generic (PLEG): container finished" podID="52100521-67e9-40c9-887c-eda6560f06e0" containerID="62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae" exitCode=0 Dec 03 13:57:16.093421 master-0 kubenswrapper[8988]: I1203 13:57:16.093390 8988 generic.go:334] "Generic (PLEG): container finished" podID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" containerID="97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c" exitCode=0 Dec 03 13:57:18.645673 master-0 kubenswrapper[8988]: I1203 13:57:18.645576 8988 status_manager.go:851] "Failed to get status for pod" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods package-server-manager-75b4d49d4c-h599p)" Dec 03 13:57:19.115787 master-0 kubenswrapper[8988]: I1203 13:57:19.115682 8988 generic.go:334] "Generic (PLEG): container finished" podID="1c562495-1290-4792-b4b2-639faa594ae2" containerID="f767adcff9a0e233cd5a0d89a9f43dff3fc735aa20c23293aa5dcee5ce476e89" exitCode=0 Dec 03 13:57:20.126453 master-0 kubenswrapper[8988]: I1203 13:57:20.126339 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/0.log" Dec 03 13:57:20.127823 master-0 kubenswrapper[8988]: I1203 13:57:20.126828 8988 generic.go:334] "Generic (PLEG): container finished" podID="da583723-b3ad-4a6f-b586-09b739bd7f8c" containerID="55c650b6735d1149a2afda93b8298292e086e4e3f1a7fa967236b4dd8824447e" exitCode=1 Dec 03 13:57:21.143721 master-0 kubenswrapper[8988]: I1203 
13:57:21.143637 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": read tcp 10.128.0.2:51468->10.128.0.43:8443: read: connection reset by peer" start-of-body= Dec 03 13:57:21.143721 master-0 kubenswrapper[8988]: I1203 13:57:21.143712 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": read tcp 10.128.0.2:51468->10.128.0.43:8443: read: connection reset by peer" Dec 03 13:57:21.144838 master-0 kubenswrapper[8988]: I1203 13:57:21.144116 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body= Dec 03 13:57:21.144838 master-0 kubenswrapper[8988]: I1203 13:57:21.144144 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" Dec 03 13:57:22.142317 master-0 kubenswrapper[8988]: I1203 13:57:22.142223 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-57fd58bc7b-kktql_24dfafc9-86a9-450e-ac62-a871138106c0/oauth-apiserver/0.log" Dec 03 13:57:22.142941 master-0 kubenswrapper[8988]: I1203 13:57:22.142630 8988 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092" exitCode=1 
Dec 03 13:57:22.249195 master-0 kubenswrapper[8988]: I1203 13:57:22.249114 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body= Dec 03 13:57:22.249195 master-0 kubenswrapper[8988]: I1203 13:57:22.249200 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" Dec 03 13:57:25.190408 master-0 kubenswrapper[8988]: I1203 13:57:25.190318 8988 generic.go:334] "Generic (PLEG): container finished" podID="b051ae27-7879-448d-b426-4dce76e29739" containerID="4edfa8a89bc0d5038266241047b9c2dea2c14e6566f232726960cf6811e895c0" exitCode=0 Dec 03 13:57:25.295708 master-0 kubenswrapper[8988]: E1203 13:57:25.295482 8988 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Dec 03 13:57:26.384178 master-0 kubenswrapper[8988]: E1203 13:57:26.383798 8988 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{multus-admission-controller-5bdcc987c4-x99xc.187db920904f1ffc openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:multus-admission-controller-5bdcc987c4-x99xc,UID:22673f47-9484-4eed-bbce-888588c754ed,APIVersion:v1,ResourceVersion:8518,FieldPath:spec.containers{multus-admission-controller},},Reason:Created,Message:Created container: multus-admission-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:56:18.510299132 +0000 UTC m=+79.698367465,LastTimestamp:2025-12-03 13:56:18.510299132 +0000 UTC m=+79.698367465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 13:57:27.057353 master-0 kubenswrapper[8988]: E1203 13:57:27.057284 8988 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Dec 03 13:57:27.057743 master-0 kubenswrapper[8988]: E1203 13:57:27.057572 8988 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.036s" Dec 03 13:57:27.057743 master-0 kubenswrapper[8988]: I1203 13:57:27.057631 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:57:27.057743 master-0 kubenswrapper[8988]: I1203 13:57:27.057727 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:57:27.059234 master-0 kubenswrapper[8988]: I1203 13:57:27.059175 8988 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup 
probe, will be restarted" Dec 03 13:57:27.059370 master-0 kubenswrapper[8988]: I1203 13:57:27.059327 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" containerID="cri-o://bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c" gracePeriod=30 Dec 03 13:57:27.080403 master-0 kubenswrapper[8988]: I1203 13:57:27.079196 8988 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Dec 03 13:57:27.224146 master-0 kubenswrapper[8988]: I1203 13:57:27.222770 8988 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c" exitCode=2 Dec 03 13:57:27.249870 master-0 kubenswrapper[8988]: I1203 13:57:27.249739 8988 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" start-of-body= Dec 03 13:57:27.250181 master-0 kubenswrapper[8988]: I1203 13:57:27.249898 8988 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.128.0.43:8443/livez\": dial tcp 10.128.0.43:8443: connect: connection refused" Dec 03 13:57:27.962134 master-0 kubenswrapper[8988]: I1203 13:57:27.962058 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"ee6150c4-22d1-465b-a934-74d5e197d646","Type":"ContainerDied","Data":"a3e5841f6f6d8362456d4cf786f11e54bc8b9d3300e0bfe95ffe518785f2d7ae"} Dec 03 13:57:27.962134 master-0 kubenswrapper[8988]: 
I1203 13:57:27.962113 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e5841f6f6d8362456d4cf786f11e54bc8b9d3300e0bfe95ffe518785f2d7ae" Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962203 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9477082b-005c-4ff5-812a-7c3230f60da2","Type":"ContainerDied","Data":"329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962221 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962234 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962245 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962282 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962291 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962447 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962514 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerDied","Data":"62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962595 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerDied","Data":"97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962641 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerDied","Data":"f767adcff9a0e233cd5a0d89a9f43dff3fc735aa20c23293aa5dcee5ce476e89"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962658 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerDied","Data":"55c650b6735d1149a2afda93b8298292e086e4e3f1a7fa967236b4dd8824447e"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962797 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerDied","Data":"64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962876 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerDied","Data":"4edfa8a89bc0d5038266241047b9c2dea2c14e6566f232726960cf6811e895c0"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962895 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c"} Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.962928 8988 scope.go:117] "RemoveContainer" containerID="34c71ee91f38a33be1dd0b9077e78348e7634a7f7bd5a24409ec5e8b872d684d" Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.963129 8988 scope.go:117] "RemoveContainer" containerID="97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c" Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.963839 8988 scope.go:117] "RemoveContainer" containerID="62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae" Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.963998 8988 scope.go:117] "RemoveContainer" containerID="64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092" Dec 03 13:57:27.964229 master-0 kubenswrapper[8988]: I1203 13:57:27.964213 8988 scope.go:117] "RemoveContainer" containerID="b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760" Dec 03 13:57:27.965288 master-0 kubenswrapper[8988]: I1203 13:57:27.964908 8988 scope.go:117] "RemoveContainer" 
containerID="f767adcff9a0e233cd5a0d89a9f43dff3fc735aa20c23293aa5dcee5ce476e89" Dec 03 13:57:27.966676 master-0 kubenswrapper[8988]: I1203 13:57:27.966636 8988 scope.go:117] "RemoveContainer" containerID="55c650b6735d1149a2afda93b8298292e086e4e3f1a7fa967236b4dd8824447e" Dec 03 13:57:27.967190 master-0 kubenswrapper[8988]: I1203 13:57:27.967138 8988 scope.go:117] "RemoveContainer" containerID="4edfa8a89bc0d5038266241047b9c2dea2c14e6566f232726960cf6811e895c0" Dec 03 13:57:27.983574 master-0 kubenswrapper[8988]: I1203 13:57:27.983516 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Dec 03 13:57:27.983574 master-0 kubenswrapper[8988]: I1203 13:57:27.983567 8988 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="b5ed49a0-8ae8-4c17-8a14-e6bf9be0118b" Dec 03 13:57:27.985887 master-0 kubenswrapper[8988]: I1203 13:57:27.985841 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Dec 03 13:57:27.985887 master-0 kubenswrapper[8988]: I1203 13:57:27.985869 8988 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="b5ed49a0-8ae8-4c17-8a14-e6bf9be0118b" Dec 03 13:57:28.014096 master-0 kubenswrapper[8988]: I1203 13:57:28.014018 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podStartSLOduration=75.418033148 podStartE2EDuration="1m22.013982475s" podCreationTimestamp="2025-12-03 13:56:06 +0000 UTC" firstStartedPulling="2025-12-03 13:56:11.254423886 +0000 UTC m=+72.442492169" lastFinishedPulling="2025-12-03 13:56:17.850373213 +0000 UTC m=+79.038441496" observedRunningTime="2025-12-03 13:57:27.999437362 +0000 UTC m=+149.187505655" watchObservedRunningTime="2025-12-03 13:57:28.013982475 +0000 UTC m=+149.202050758" Dec 03 13:57:28.122345 master-0 
kubenswrapper[8988]: I1203 13:57:28.121104 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podStartSLOduration=72.121071211 podStartE2EDuration="1m12.121071211s" podCreationTimestamp="2025-12-03 13:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:57:28.120580426 +0000 UTC m=+149.308648709" watchObservedRunningTime="2025-12-03 13:57:28.121071211 +0000 UTC m=+149.309139504" Dec 03 13:57:28.167152 master-0 kubenswrapper[8988]: I1203 13:57:28.166725 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Dec 03 13:57:28.167152 master-0 kubenswrapper[8988]: I1203 13:57:28.166814 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Dec 03 13:57:28.242611 master-0 kubenswrapper[8988]: I1203 13:57:28.242542 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_725fa88d-f29d-4dee-bfba-6e1c4506f73c/installer/0.log" Dec 03 13:57:28.242739 master-0 kubenswrapper[8988]: I1203 13:57:28.242618 8988 generic.go:334] "Generic (PLEG): container finished" podID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerID="6f689de8a1f834cf175e7a94d531ac0bc5fbc598832080a29d70489ce59fa461" exitCode=1 Dec 03 13:57:28.242850 master-0 kubenswrapper[8988]: I1203 13:57:28.242727 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"725fa88d-f29d-4dee-bfba-6e1c4506f73c","Type":"ContainerDied","Data":"6f689de8a1f834cf175e7a94d531ac0bc5fbc598832080a29d70489ce59fa461"} Dec 03 13:57:28.249119 master-0 kubenswrapper[8988]: I1203 13:57:28.249033 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b"} Dec 03 13:57:28.568693 master-0 kubenswrapper[8988]: I1203 13:57:28.568637 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9477082b-005c-4ff5-812a-7c3230f60da2/installer/0.log" Dec 03 13:57:28.568972 master-0 kubenswrapper[8988]: I1203 13:57:28.568719 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:57:28.688754 master-0 kubenswrapper[8988]: I1203 13:57:28.688496 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir\") pod \"9477082b-005c-4ff5-812a-7c3230f60da2\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " Dec 03 13:57:28.688754 master-0 kubenswrapper[8988]: I1203 13:57:28.688584 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access\") pod \"9477082b-005c-4ff5-812a-7c3230f60da2\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " Dec 03 13:57:28.688754 master-0 kubenswrapper[8988]: I1203 13:57:28.688623 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock\") pod \"9477082b-005c-4ff5-812a-7c3230f60da2\" (UID: \"9477082b-005c-4ff5-812a-7c3230f60da2\") " Dec 03 13:57:28.688754 master-0 kubenswrapper[8988]: I1203 13:57:28.688658 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"9477082b-005c-4ff5-812a-7c3230f60da2" (UID: "9477082b-005c-4ff5-812a-7c3230f60da2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:57:28.689123 master-0 kubenswrapper[8988]: I1203 13:57:28.688793 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock" (OuterVolumeSpecName: "var-lock") pod "9477082b-005c-4ff5-812a-7c3230f60da2" (UID: "9477082b-005c-4ff5-812a-7c3230f60da2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:57:28.689123 master-0 kubenswrapper[8988]: I1203 13:57:28.688899 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:28.689123 master-0 kubenswrapper[8988]: I1203 13:57:28.688924 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9477082b-005c-4ff5-812a-7c3230f60da2-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:28.706747 master-0 kubenswrapper[8988]: I1203 13:57:28.706652 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9477082b-005c-4ff5-812a-7c3230f60da2" (UID: "9477082b-005c-4ff5-812a-7c3230f60da2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:57:28.789760 master-0 kubenswrapper[8988]: I1203 13:57:28.789602 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9477082b-005c-4ff5-812a-7c3230f60da2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:29.029786 master-0 kubenswrapper[8988]: I1203 13:57:29.029726 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" path="/var/lib/kubelet/pods/0056326a-6d5e-47cd-b350-c5c6287cfe2a/volumes" Dec 03 13:57:29.258214 master-0 kubenswrapper[8988]: I1203 13:57:29.258147 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/0.log" Dec 03 13:57:29.258956 master-0 kubenswrapper[8988]: I1203 13:57:29.258895 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"a9ab7af0c7f5cba028485cf75bcb8b2472f6f17e4fd93b6731c12213f34fc92b"} Dec 03 13:57:29.264511 master-0 kubenswrapper[8988]: I1203 13:57:29.264195 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"ffe62d045336e3899059342c59af7fb1f994435190c657ef45defb2a59be314b"} Dec 03 13:57:29.267065 master-0 kubenswrapper[8988]: I1203 13:57:29.266988 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"dead9648b6db50ab9ffadeb0ded4ac60b4b62fa9651afaff45090595f1cc6b7d"} Dec 03 13:57:29.268690 master-0 kubenswrapper[8988]: I1203 
13:57:29.268623 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9477082b-005c-4ff5-812a-7c3230f60da2/installer/0.log" Dec 03 13:57:29.268850 master-0 kubenswrapper[8988]: I1203 13:57:29.268782 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Dec 03 13:57:29.269006 master-0 kubenswrapper[8988]: I1203 13:57:29.268787 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"9477082b-005c-4ff5-812a-7c3230f60da2","Type":"ContainerDied","Data":"3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305"} Dec 03 13:57:29.269006 master-0 kubenswrapper[8988]: I1203 13:57:29.268967 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305" Dec 03 13:57:29.271593 master-0 kubenswrapper[8988]: I1203 13:57:29.271542 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"566f323c45d81781fedd2bdc80905670d4cd7c9f187134067cb868a4c67c719d"} Dec 03 13:57:29.274221 master-0 kubenswrapper[8988]: I1203 13:57:29.274175 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-57fd58bc7b-kktql_24dfafc9-86a9-450e-ac62-a871138106c0/oauth-apiserver/0.log" Dec 03 13:57:29.274733 master-0 kubenswrapper[8988]: I1203 13:57:29.274683 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"ddcd1f07c9e39bdd0ed4b675613ff5933dfd84abbd356e34bbea01162fe4cd82"} Dec 03 13:57:29.277105 master-0 kubenswrapper[8988]: I1203 
13:57:29.277063 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5f574c6c79-86bh9_5aa67ace-d03a-4d06-9fb5-24777b65f2cc/kube-scheduler-operator-container/1.log" Dec 03 13:57:29.277229 master-0 kubenswrapper[8988]: I1203 13:57:29.277152 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"80f97a02098439f02b003ce61a430fa01c2b4d045caf22ae6951bd0346274eae"} Dec 03 13:57:29.279206 master-0 kubenswrapper[8988]: I1203 13:57:29.279159 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"d59cdedb5194c6940e4cccb82c7b05bdcd7e1bcbc39bc385216aa8a7a9d70f09"} Dec 03 13:57:29.283196 master-0 kubenswrapper[8988]: I1203 13:57:29.283074 8988 generic.go:334] "Generic (PLEG): container finished" podID="918ff36b-662f-46ae-b71a-301df7e67735" containerID="260c925573f93c0439722d8810ce6c195e1dc2d279cb295c92ace13d1222474e" exitCode=0 Dec 03 13:57:29.283196 master-0 kubenswrapper[8988]: I1203 13:57:29.283136 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerDied","Data":"260c925573f93c0439722d8810ce6c195e1dc2d279cb295c92ace13d1222474e"} Dec 03 13:57:29.393069 master-0 kubenswrapper[8988]: I1203 13:57:29.391175 8988 scope.go:117] "RemoveContainer" containerID="260c925573f93c0439722d8810ce6c195e1dc2d279cb295c92ace13d1222474e" Dec 03 13:57:29.696362 master-0 kubenswrapper[8988]: I1203 13:57:29.696315 8988 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_725fa88d-f29d-4dee-bfba-6e1c4506f73c/installer/0.log" Dec 03 13:57:29.696799 master-0 kubenswrapper[8988]: I1203 13:57:29.696781 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:57:29.803012 master-0 kubenswrapper[8988]: I1203 13:57:29.802914 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir\") pod \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " Dec 03 13:57:29.803390 master-0 kubenswrapper[8988]: I1203 13:57:29.803046 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock\") pod \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " Dec 03 13:57:29.803390 master-0 kubenswrapper[8988]: I1203 13:57:29.803157 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "725fa88d-f29d-4dee-bfba-6e1c4506f73c" (UID: "725fa88d-f29d-4dee-bfba-6e1c4506f73c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:57:29.803390 master-0 kubenswrapper[8988]: I1203 13:57:29.803194 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access\") pod \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\" (UID: \"725fa88d-f29d-4dee-bfba-6e1c4506f73c\") " Dec 03 13:57:29.803390 master-0 kubenswrapper[8988]: I1203 13:57:29.803226 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock" (OuterVolumeSpecName: "var-lock") pod "725fa88d-f29d-4dee-bfba-6e1c4506f73c" (UID: "725fa88d-f29d-4dee-bfba-6e1c4506f73c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:57:29.803737 master-0 kubenswrapper[8988]: I1203 13:57:29.803700 8988 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:29.803737 master-0 kubenswrapper[8988]: I1203 13:57:29.803725 8988 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/725fa88d-f29d-4dee-bfba-6e1c4506f73c-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:29.807144 master-0 kubenswrapper[8988]: I1203 13:57:29.807064 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "725fa88d-f29d-4dee-bfba-6e1c4506f73c" (UID: "725fa88d-f29d-4dee-bfba-6e1c4506f73c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:57:29.904605 master-0 kubenswrapper[8988]: I1203 13:57:29.904240 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/725fa88d-f29d-4dee-bfba-6e1c4506f73c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 13:57:30.291658 master-0 kubenswrapper[8988]: I1203 13:57:30.291581 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_725fa88d-f29d-4dee-bfba-6e1c4506f73c/installer/0.log" Dec 03 13:57:30.292309 master-0 kubenswrapper[8988]: I1203 13:57:30.291757 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Dec 03 13:57:30.292309 master-0 kubenswrapper[8988]: I1203 13:57:30.291765 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"725fa88d-f29d-4dee-bfba-6e1c4506f73c","Type":"ContainerDied","Data":"d042c55327fe42c18032feedcbcb89d5a0275f42d648331d12b63fb7f5eab7f6"} Dec 03 13:57:30.292309 master-0 kubenswrapper[8988]: I1203 13:57:30.291886 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d042c55327fe42c18032feedcbcb89d5a0275f42d648331d12b63fb7f5eab7f6" Dec 03 13:57:30.295027 master-0 kubenswrapper[8988]: I1203 13:57:30.294969 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"a36aab91e8ccfa2e82298bdb3a57775a11a138a2333885fd5cad7b8691727ab9"} Dec 03 13:57:30.436938 master-0 kubenswrapper[8988]: I1203 13:57:30.436856 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:57:30.804591 master-0 
kubenswrapper[8988]: I1203 13:57:30.804496 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:57:30.809593 master-0 kubenswrapper[8988]: I1203 13:57:30.809528 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:57:31.430820 master-0 kubenswrapper[8988]: I1203 13:57:31.430735 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:31.430820 master-0 kubenswrapper[8988]: I1203 13:57:31.430828 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:31.464221 master-0 kubenswrapper[8988]: I1203 13:57:31.464115 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:32.204545 master-0 kubenswrapper[8988]: I1203 13:57:32.204460 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Dec 03 13:57:32.247884 master-0 kubenswrapper[8988]: I1203 13:57:32.247829 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:57:32.248580 master-0 kubenswrapper[8988]: I1203 13:57:32.248469 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:57:32.255820 master-0 kubenswrapper[8988]: I1203 13:57:32.255745 8988 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:57:32.278582 master-0 kubenswrapper[8988]: I1203 13:57:32.278474 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.278431792 podStartE2EDuration="278.431792ms" 
podCreationTimestamp="2025-12-03 13:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:57:32.276059183 +0000 UTC m=+153.464127466" watchObservedRunningTime="2025-12-03 13:57:32.278431792 +0000 UTC m=+153.466500075" Dec 03 13:57:32.322856 master-0 kubenswrapper[8988]: I1203 13:57:32.322756 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:57:32.327293 master-0 kubenswrapper[8988]: I1203 13:57:32.326666 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:32.330169 master-0 kubenswrapper[8988]: E1203 13:57:32.330103 8988 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Dec 03 13:57:39.352291 master-0 kubenswrapper[8988]: I1203 13:57:39.352011 8988 patch_prober.go:28] interesting pod/etcd-operator-7978bf889c-n64v4 container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Dec 03 13:57:39.353101 master-0 kubenswrapper[8988]: I1203 13:57:39.352105 8988 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Dec 03 13:57:40.442562 master-0 kubenswrapper[8988]: I1203 13:57:40.442481 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:57:49.182384 master-0 kubenswrapper[8988]: I1203 13:57:49.182305 8988 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: E1203 13:57:49.182633 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.182656 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: E1203 13:57:49.182722 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.182734 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: E1203 13:57:49.182751 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9477082b-005c-4ff5-812a-7c3230f60da2" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.182787 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="9477082b-005c-4ff5-812a-7c3230f60da2" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: E1203 13:57:49.182808 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.182817 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: E1203 13:57:49.182834 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.182874 8988 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183020 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="9477082b-005c-4ff5-812a-7c3230f60da2" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183047 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183134 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="0056326a-6d5e-47cd-b350-c5c6287cfe2a" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183155 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183176 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer" Dec 03 13:57:49.183926 master-0 kubenswrapper[8988]: I1203 13:57:49.183879 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.187401 master-0 kubenswrapper[8988]: I1203 13:57:49.187303 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz" Dec 03 13:57:49.191247 master-0 kubenswrapper[8988]: I1203 13:57:49.191192 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 13:57:49.191964 master-0 kubenswrapper[8988]: I1203 13:57:49.191906 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Dec 03 13:57:49.313421 master-0 kubenswrapper[8988]: I1203 13:57:49.313367 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.313421 master-0 kubenswrapper[8988]: I1203 13:57:49.313424 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.313421 master-0 kubenswrapper[8988]: I1203 13:57:49.313442 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.415151 master-0 kubenswrapper[8988]: I1203 13:57:49.414981 8988 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.415151 master-0 kubenswrapper[8988]: I1203 13:57:49.415055 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.415151 master-0 kubenswrapper[8988]: I1203 13:57:49.415142 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.415574 master-0 kubenswrapper[8988]: I1203 13:57:49.415475 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.415639 master-0 kubenswrapper[8988]: I1203 13:57:49.415497 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.441989 master-0 kubenswrapper[8988]: I1203 13:57:49.441753 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:49.514613 master-0 kubenswrapper[8988]: I1203 13:57:49.514515 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:57:50.315305 master-0 kubenswrapper[8988]: I1203 13:57:50.315179 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Dec 03 13:57:50.335656 master-0 kubenswrapper[8988]: W1203 13:57:50.335571 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod50f28c77_b15c_4b86_93c8_221c0cc82bb2.slice/crio-0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4 WatchSource:0}: Error finding container 0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4: Status 404 returned error can't find the container with id 0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4 Dec 03 13:57:50.608775 master-0 kubenswrapper[8988]: I1203 13:57:50.608688 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"50f28c77-b15c-4b86-93c8-221c0cc82bb2","Type":"ContainerStarted","Data":"0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4"} Dec 03 13:57:51.618103 master-0 kubenswrapper[8988]: I1203 13:57:51.618038 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"50f28c77-b15c-4b86-93c8-221c0cc82bb2","Type":"ContainerStarted","Data":"efe5b98b8193b6c315bd2fdafc1dfa799f114179992474177c6e7d697c70abb2"} Dec 03 13:57:57.910674 master-0 kubenswrapper[8988]: I1203 13:57:57.910530 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=8.910493269 podStartE2EDuration="8.910493269s" podCreationTimestamp="2025-12-03 13:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:57:51.636845329 +0000 UTC m=+172.824913622" watchObservedRunningTime="2025-12-03 13:57:57.910493269 +0000 UTC m=+179.098561552"
Dec 03 13:57:57.913164 master-0 kubenswrapper[8988]: I1203 13:57:57.913125 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"]
Dec 03 13:57:57.914138 master-0 kubenswrapper[8988]: I1203 13:57:57.914070 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:57.924834 master-0 kubenswrapper[8988]: I1203 13:57:57.924758 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"]
Dec 03 13:57:57.924834 master-0 kubenswrapper[8988]: I1203 13:57:57.924802 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 13:57:57.925337 master-0 kubenswrapper[8988]: I1203 13:57:57.925309 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 13:57:57.925392 master-0 kubenswrapper[8988]: I1203 13:57:57.925344 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 13:57:57.925512 master-0 kubenswrapper[8988]: I1203 13:57:57.925489 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 13:57:57.925766 master-0 kubenswrapper[8988]: I1203 13:57:57.925743 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 13:57:57.925968 master-0 kubenswrapper[8988]: I1203 13:57:57.925935 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"]
Dec 03 13:57:57.926229 master-0 kubenswrapper[8988]: I1203 13:57:57.926206 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 13:57:57.926557 master-0 kubenswrapper[8988]: I1203 13:57:57.926532 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:57.927032 master-0 kubenswrapper[8988]: I1203 13:57:57.927000 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:57.952369 master-0 kubenswrapper[8988]: I1203 13:57:57.952319 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw"
Dec 03 13:57:57.957855 master-0 kubenswrapper[8988]: I1203 13:57:57.957792 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Dec 03 13:57:57.961576 master-0 kubenswrapper[8988]: I1203 13:57:57.961529 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Dec 03 13:57:57.962201 master-0 kubenswrapper[8988]: I1203 13:57:57.962158 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 13:57:57.962201 master-0 kubenswrapper[8988]: I1203 13:57:57.962196 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Dec 03 13:57:57.962573 master-0 kubenswrapper[8988]: I1203 13:57:57.962547 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 13:57:57.962700 master-0 kubenswrapper[8988]: I1203 13:57:57.962674 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 13:57:57.962992 master-0 kubenswrapper[8988]: I1203 13:57:57.962957 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 13:57:57.963741 master-0 kubenswrapper[8988]: I1203 13:57:57.963720 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz"
Dec 03 13:57:57.974210 master-0 kubenswrapper[8988]: I1203 13:57:57.974175 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 13:57:58.053036 master-0 kubenswrapper[8988]: I1203 13:57:58.052843 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"]
Dec 03 13:57:58.053618 master-0 kubenswrapper[8988]: I1203 13:57:58.053594 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"]
Dec 03 13:57:58.054050 master-0 kubenswrapper[8988]: I1203 13:57:58.054020 8988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.054563 master-0 kubenswrapper[8988]: I1203 13:57:58.054538 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.059095 master-0 kubenswrapper[8988]: I1203 13:57:58.059021 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"]
Dec 03 13:57:58.062186 master-0 kubenswrapper[8988]: I1203 13:57:58.062133 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g"
Dec 03 13:57:58.062345 master-0 kubenswrapper[8988]: I1203 13:57:58.062298 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Dec 03 13:57:58.062493 master-0 kubenswrapper[8988]: I1203 13:57:58.062447 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Dec 03 13:57:58.062693 master-0 kubenswrapper[8988]: I1203 13:57:58.062639 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Dec 03 13:57:58.062951 master-0 kubenswrapper[8988]: I1203 13:57:58.062923 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx"
Dec 03 13:57:58.070457 master-0 kubenswrapper[8988]: I1203 13:57:58.070407 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Dec 03 13:57:58.071025 master-0 kubenswrapper[8988]: I1203 13:57:58.070997 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088381 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088495 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088587 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088640 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088727 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.088792 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.089398 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.089461 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.089562 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.089616 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.089659 master-0 kubenswrapper[8988]: I1203 13:57:58.089634 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.094281 master-0 kubenswrapper[8988]: I1203 13:57:58.092581 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"]
Dec 03 13:57:58.106345 master-0 kubenswrapper[8988]: I1203 13:57:58.104048 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-59d99f9b7b-74sss"]
Dec 03 13:57:58.106345 master-0 kubenswrapper[8988]: I1203 13:57:58.104853 8988 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.120298 master-0 kubenswrapper[8988]: I1203 13:57:58.112037 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd"
Dec 03 13:57:58.120298 master-0 kubenswrapper[8988]: I1203 13:57:58.112329 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Dec 03 13:57:58.120298 master-0 kubenswrapper[8988]: I1203 13:57:58.112518 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Dec 03 13:57:58.120298 master-0 kubenswrapper[8988]: I1203 13:57:58.119677 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Dec 03 13:57:58.120298 master-0 kubenswrapper[8988]: I1203 13:57:58.119932 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Dec 03 13:57:58.137093 master-0 kubenswrapper[8988]: I1203 13:57:58.137016 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Dec 03 13:57:58.148375 master-0 kubenswrapper[8988]: I1203 13:57:58.148147 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"]
Dec 03 13:57:58.153891 master-0 kubenswrapper[8988]: I1203 13:57:58.153828 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59d99f9b7b-74sss"]
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.195490 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"]
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.196948 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197035 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197073 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197099 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197118 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197146 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197177 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197208 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197235 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197281 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197311 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197334 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197361 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197394 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197424 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197455 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.201290 master-0 kubenswrapper[8988]: I1203 13:57:58.197826 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") "
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.205201 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.205373 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.206294 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.206365 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.207517 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 13:57:58.212165 master-0 kubenswrapper[8988]: I1203 13:57:58.209920 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:57:58.212661 master-0 kubenswrapper[8988]: I1203 13:57:58.212613 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.213997 master-0 kubenswrapper[8988]: I1203 13:57:58.213956 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"]
Dec 03 13:57:58.216342 master-0 kubenswrapper[8988]: I1203 13:57:58.214215 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.216342 master-0 kubenswrapper[8988]: I1203 13:57:58.215223 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.226222 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.226586 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.226725 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.226851 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227067 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227197 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227410 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227528 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227645 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227762 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.227866 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"]
Dec 03 13:57:58.229279 master-0 kubenswrapper[8988]: I1203 13:57:58.228727 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:57:58.236627 master-0 kubenswrapper[8988]: I1203 13:57:58.235583 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8"
Dec 03 13:57:58.236627 master-0 kubenswrapper[8988]: I1203 13:57:58.235834 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 13:57:58.238531 master-0 kubenswrapper[8988]: I1203 13:57:58.238500 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"]
Dec 03 13:57:58.239726 master-0 kubenswrapper[8988]: I1203 13:57:58.239694 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:57:58.243432 master-0 kubenswrapper[8988]: I1203 13:57:58.241887 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6"
Dec 03 13:57:58.243432 master-0 kubenswrapper[8988]: I1203 13:57:58.242095 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 13:57:58.243432 master-0 kubenswrapper[8988]: I1203 13:57:58.242246 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 13:57:58.247277 master-0 kubenswrapper[8988]: I1203 13:57:58.243584 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 13:57:58.301493 master-0 kubenswrapper[8988]: I1203 13:57:58.301338 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.301493 master-0 kubenswrapper[8988]: I1203 13:57:58.301441 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.301493 master-0 kubenswrapper[8988]: I1203 13:57:58.301475 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\"
(UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301514 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301548 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301575 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301606 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301638 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.301681 master-0 kubenswrapper[8988]: I1203 13:57:58.301666 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.301938 master-0 kubenswrapper[8988]: I1203 13:57:58.301700 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.305406 master-0 kubenswrapper[8988]: I1203 13:57:58.304196 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.310275 master-0 kubenswrapper[8988]: I1203 13:57:58.308676 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.310275 master-0 kubenswrapper[8988]: I1203 13:57:58.309151 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.394375 master-0 kubenswrapper[8988]: I1203 13:57:58.394306 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"]
Dec 03 13:57:58.394968 master-0 kubenswrapper[8988]: I1203 13:57:58.394917 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"]
Dec 03 13:57:58.399159 master-0 kubenswrapper[8988]: I1203 13:57:58.399123 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"]
Dec 03 13:57:58.399237 master-0 kubenswrapper[8988]: I1203 13:57:58.399165 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"]
Dec 03 13:57:58.403672 master-0 kubenswrapper[8988]: I1203 13:57:58.403611 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:57:58.403747 master-0 kubenswrapper[8988]: I1203 13:57:58.403699 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:57:58.403747 master-0 kubenswrapper[8988]: I1203 13:57:58.403744 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:57:58.403814 master-0 kubenswrapper[8988]: I1203 13:57:58.403772 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:57:58.403859 master-0 kubenswrapper[8988]: I1203 13:57:58.403808 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") "
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.403859 master-0 kubenswrapper[8988]: I1203 13:57:58.403839 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.403954 master-0 kubenswrapper[8988]: I1203 13:57:58.403914 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.403994 master-0 kubenswrapper[8988]: I1203 13:57:58.403964 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.404066 master-0 kubenswrapper[8988]: I1203 13:57:58.403997 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.404104 master-0 kubenswrapper[8988]: I1203 13:57:58.404077 8988 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.404150 master-0 kubenswrapper[8988]: I1203 13:57:58.404122 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.404185 master-0 kubenswrapper[8988]: I1203 13:57:58.404157 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.404233 master-0 kubenswrapper[8988]: I1203 13:57:58.404220 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.404284 master-0 kubenswrapper[8988]: I1203 13:57:58.404247 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod 
\"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.404323 master-0 kubenswrapper[8988]: I1203 13:57:58.404309 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:57:58.404395 master-0 kubenswrapper[8988]: I1203 13:57:58.404373 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.404433 master-0 kubenswrapper[8988]: I1203 13:57:58.404417 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.404472 master-0 kubenswrapper[8988]: I1203 13:57:58.404448 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.404507 master-0 
kubenswrapper[8988]: I1203 13:57:58.404476 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:57:58.406068 master-0 kubenswrapper[8988]: I1203 13:57:58.406032 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.407572 master-0 kubenswrapper[8988]: I1203 13:57:58.407528 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.408140 master-0 kubenswrapper[8988]: I1203 13:57:58.408083 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.413044 master-0 kubenswrapper[8988]: I1203 13:57:58.412754 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod 
\"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.451926 master-0 kubenswrapper[8988]: I1203 13:57:58.451842 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"] Dec 03 13:57:58.452490 master-0 kubenswrapper[8988]: I1203 13:57:58.452328 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="multus-admission-controller" containerID="cri-o://bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881" gracePeriod=30 Dec 03 13:57:58.453000 master-0 kubenswrapper[8988]: I1203 13:57:58.452641 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="kube-rbac-proxy" containerID="cri-o://4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc" gracePeriod=30 Dec 03 13:57:58.482303 master-0 kubenswrapper[8988]: I1203 13:57:58.476054 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:57:58.482303 master-0 kubenswrapper[8988]: I1203 13:57:58.479148 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: 
\"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.482303 master-0 kubenswrapper[8988]: I1203 13:57:58.479821 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"] Dec 03 13:57:58.482303 master-0 kubenswrapper[8988]: I1203 13:57:58.480916 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:57:58.487476 master-0 kubenswrapper[8988]: I1203 13:57:58.486848 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 13:57:58.487545 master-0 kubenswrapper[8988]: I1203 13:57:58.487481 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl" Dec 03 13:57:58.489631 master-0 kubenswrapper[8988]: I1203 13:57:58.488708 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 13:57:58.489631 master-0 kubenswrapper[8988]: I1203 13:57:58.489027 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 13:57:58.489631 master-0 kubenswrapper[8988]: I1203 13:57:58.489290 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 13:57:58.494771 master-0 kubenswrapper[8988]: I1203 13:57:58.494441 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 13:57:58.498680 master-0 kubenswrapper[8988]: I1203 13:57:58.496511 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 13:57:58.498680 master-0 kubenswrapper[8988]: I1203 13:57:58.497355 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:57:58.512387 master-0 kubenswrapper[8988]: I1203 13:57:58.512234 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 13:57:58.514027 master-0 kubenswrapper[8988]: I1203 13:57:58.513973 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.514112 master-0 kubenswrapper[8988]: I1203 13:57:58.514041 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.514112 master-0 kubenswrapper[8988]: I1203 13:57:58.514068 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.514112 master-0 kubenswrapper[8988]: I1203 13:57:58.514089 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:57:58.514269 master-0 kubenswrapper[8988]: I1203 13:57:58.514123 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.514269 master-0 kubenswrapper[8988]: I1203 13:57:58.514143 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.514269 master-0 kubenswrapper[8988]: I1203 13:57:58.514165 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: 
\"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.514269 master-0 kubenswrapper[8988]: I1203 13:57:58.514186 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516493 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516564 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516591 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516662 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516705 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.521417 master-0 kubenswrapper[8988]: I1203 13:57:58.516746 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:57:58.527626 master-0 kubenswrapper[8988]: I1203 13:57:58.526794 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.527626 master-0 kubenswrapper[8988]: I1203 13:57:58.527526 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: 
\"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.528230 master-0 kubenswrapper[8988]: I1203 13:57:58.528185 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"] Dec 03 13:57:58.530275 master-0 kubenswrapper[8988]: I1203 13:57:58.530113 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:57:58.531149 master-0 kubenswrapper[8988]: I1203 13:57:58.530547 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:57:58.534593 master-0 kubenswrapper[8988]: I1203 13:57:58.531212 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.534593 master-0 kubenswrapper[8988]: I1203 13:57:58.532701 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:57:58.534593 master-0 kubenswrapper[8988]: I1203 13:57:58.533286 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.534593 master-0 kubenswrapper[8988]: I1203 13:57:58.533351 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.534593 master-0 kubenswrapper[8988]: I1203 13:57:58.534455 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 13:57:58.536275 master-0 kubenswrapper[8988]: I1203 13:57:58.535022 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 13:57:58.536275 master-0 kubenswrapper[8988]: I1203 13:57:58.535181 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99" Dec 03 13:57:58.536275 master-0 kubenswrapper[8988]: I1203 13:57:58.535626 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.537499 master-0 kubenswrapper[8988]: I1203 13:57:58.537178 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:57:58.538070 master-0 kubenswrapper[8988]: I1203 13:57:58.538042 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 13:57:58.543633 master-0 kubenswrapper[8988]: I1203 13:57:58.539094 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:57:58.547301 master-0 kubenswrapper[8988]: I1203 13:57:58.546562 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:57:58.550230 master-0 kubenswrapper[8988]: I1203 13:57:58.550164 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:57:58.550345 master-0 kubenswrapper[8988]: I1203 13:57:58.550279 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"] Dec 03 13:57:58.551291 master-0 kubenswrapper[8988]: I1203 13:57:58.551224 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:57:58.553882 master-0 kubenswrapper[8988]: I1203 13:57:58.553841 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 13:57:58.557751 master-0 kubenswrapper[8988]: I1203 13:57:58.557612 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"] Dec 03 13:57:58.561283 master-0 kubenswrapper[8988]: I1203 13:57:58.560894 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:57:58.562106 master-0 kubenswrapper[8988]: I1203 13:57:58.561549 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"]
Dec 03 13:57:58.562106 master-0 kubenswrapper[8988]: I1203 13:57:58.561603 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"]
Dec 03 13:57:58.566570 master-0 kubenswrapper[8988]: I1203 13:57:58.566520 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:57:58.570520 master-0 kubenswrapper[8988]: I1203 13:57:58.570483 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:57:58.573871 master-0 kubenswrapper[8988]: I1203 13:57:58.573843 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:57:58.576984 master-0 kubenswrapper[8988]: W1203 13:57:58.576925 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f41c3e_713b_4532_8534_ceefb7f519bf.slice/crio-4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97 WatchSource:0}: Error finding container 4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97: Status 404 returned error can't find the container with id 4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97
Dec 03 13:57:58.617968 master-0 kubenswrapper[8988]: I1203 13:57:58.617796 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.617968 master-0 kubenswrapper[8988]: I1203 13:57:58.617848 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.617968 master-0 kubenswrapper[8988]: I1203 13:57:58.617879 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.617968 master-0 kubenswrapper[8988]: I1203 13:57:58.617913 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.618172 master-0 kubenswrapper[8988]: I1203 13:57:58.618043 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.618368 master-0 kubenswrapper[8988]: I1203 13:57:58.618309 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.618460 master-0 kubenswrapper[8988]: I1203 13:57:58.618435 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.621959 master-0 kubenswrapper[8988]: W1203 13:57:58.621873 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85820c13_e5cf_4af1_bd1c_dd74ea151cac.slice/crio-5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2 WatchSource:0}: Error finding container 5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2: Status 404 returned error can't find the container with id 5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2
Dec 03 13:57:58.626645 master-0 kubenswrapper[8988]: I1203 13:57:58.626122 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:57:58.644942 master-0 kubenswrapper[8988]: I1203 13:57:58.644891 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:57:58.673153 master-0 kubenswrapper[8988]: I1203 13:57:58.673071 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerStarted","Data":"4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97"}
Dec 03 13:57:58.674540 master-0 kubenswrapper[8988]: I1203 13:57:58.674501 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerStarted","Data":"5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2"}
Dec 03 13:57:58.678977 master-0 kubenswrapper[8988]: I1203 13:57:58.678656 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:57:58.685948 master-0 kubenswrapper[8988]: I1203 13:57:58.685543 8988 generic.go:334] "Generic (PLEG): container finished" podID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerID="4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc" exitCode=0
Dec 03 13:57:58.685948 master-0 kubenswrapper[8988]: I1203 13:57:58.685673 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerDied","Data":"4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc"}
Dec 03 13:57:58.693367 master-0 kubenswrapper[8988]: I1203 13:57:58.693310 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-85dbd94574-8jfp5_bcc78129-4a81-410e-9a42-b12043b5a75a/ingress-operator/0.log"
Dec 03 13:57:58.694026 master-0 kubenswrapper[8988]: I1203 13:57:58.693974 8988 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" containerID="69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91" exitCode=1
Dec 03 13:57:58.694073 master-0 kubenswrapper[8988]: I1203 13:57:58.694050 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerDied","Data":"69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91"}
Dec 03 13:57:58.705321 master-0 kubenswrapper[8988]: I1203 13:57:58.705271 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:57:58.715683 master-0 kubenswrapper[8988]: I1203 13:57:58.715642 8988 scope.go:117] "RemoveContainer" containerID="69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91"
Dec 03 13:57:58.719998 master-0 kubenswrapper[8988]: I1203 13:57:58.719955 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.720042 master-0 kubenswrapper[8988]: I1203 13:57:58.720016 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.720074 master-0 kubenswrapper[8988]: I1203 13:57:58.720047 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.720074 master-0 kubenswrapper[8988]: I1203 13:57:58.720067 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.720144 master-0 kubenswrapper[8988]: I1203 13:57:58.720093 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.720144 master-0 kubenswrapper[8988]: I1203 13:57:58.720112 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.720144 master-0 kubenswrapper[8988]: I1203 13:57:58.720133 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.720284 master-0 kubenswrapper[8988]: I1203 13:57:58.720159 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.720284 master-0 kubenswrapper[8988]: I1203 13:57:58.720193 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.720284 master-0 kubenswrapper[8988]: I1203 13:57:58.720230 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.721573 master-0 kubenswrapper[8988]: I1203 13:57:58.721543 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.722011 master-0 kubenswrapper[8988]: I1203 13:57:58.721980 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.725085 master-0 kubenswrapper[8988]: I1203 13:57:58.725044 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.727497 master-0 kubenswrapper[8988]: I1203 13:57:58.727439 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.729967 master-0 kubenswrapper[8988]: I1203 13:57:58.729912 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:58.827984 master-0 kubenswrapper[8988]: I1203 13:57:58.823004 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.827984 master-0 kubenswrapper[8988]: I1203 13:57:58.823110 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.827984 master-0 kubenswrapper[8988]: I1203 13:57:58.823141 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.833450 master-0 kubenswrapper[8988]: I1203 13:57:58.832543 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.840810 master-0 kubenswrapper[8988]: I1203 13:57:58.840771 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.922364 master-0 kubenswrapper[8988]: I1203 13:57:58.921786 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:57:58.929596 master-0 kubenswrapper[8988]: I1203 13:57:58.929057 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 13:57:58.929596 master-0 kubenswrapper[8988]: I1203 13:57:58.929463 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:58.951683 master-0 kubenswrapper[8988]: I1203 13:57:58.947622 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:58.954598 master-0 kubenswrapper[8988]: I1203 13:57:58.954508 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:59.038158 master-0 kubenswrapper[8988]: I1203 13:57:59.037505 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-59d99f9b7b-74sss"]
Dec 03 13:57:59.049168 master-0 kubenswrapper[8988]: W1203 13:57:59.049107 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95705e3_17ef_40fe_89e8_22586a32621b.slice/crio-9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409 WatchSource:0}: Error finding container 9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409: Status 404 returned error can't find the container with id 9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409
Dec 03 13:57:59.176036 master-0 kubenswrapper[8988]: I1203 13:57:59.174932 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl"
Dec 03 13:57:59.176324 master-0 kubenswrapper[8988]: I1203 13:57:59.176278 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5"
Dec 03 13:57:59.182467 master-0 kubenswrapper[8988]: I1203 13:57:59.182407 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:57:59.183607 master-0 kubenswrapper[8988]: I1203 13:57:59.183556 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 13:57:59.193467 master-0 kubenswrapper[8988]: I1203 13:57:59.193411 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99"
Dec 03 13:57:59.195096 master-0 kubenswrapper[8988]: I1203 13:57:59.195068 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4"
Dec 03 13:57:59.201832 master-0 kubenswrapper[8988]: I1203 13:57:59.201795 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:57:59.201923 master-0 kubenswrapper[8988]: I1203 13:57:59.201863 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:57:59.204463 master-0 kubenswrapper[8988]: I1203 13:57:59.204432 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:57:59.341563 master-0 kubenswrapper[8988]: I1203 13:57:59.331707 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"]
Dec 03 13:57:59.341563 master-0 kubenswrapper[8988]: I1203 13:57:59.333716 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"]
Dec 03 13:57:59.352759 master-0 kubenswrapper[8988]: W1203 13:57:59.352693 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b95a5a6_db93_4a58_aaff_3619d130c8cb.slice/crio-b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83 WatchSource:0}: Error finding container b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83: Status 404 returned error can't find the container with id b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83
Dec 03 13:57:59.394456 master-0 kubenswrapper[8988]: I1203 13:57:59.394395 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"]
Dec 03 13:57:59.403063 master-0 kubenswrapper[8988]: I1203 13:57:59.402994 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"]
Dec 03 13:57:59.574021 master-0 kubenswrapper[8988]: I1203 13:57:59.573971 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"]
Dec 03 13:57:59.578911 master-0 kubenswrapper[8988]: W1203 13:57:59.578867 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeefee934_ac6b_44e3_a6be_1ae62362ab4f.slice/crio-b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b WatchSource:0}: Error finding container b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b: Status 404 returned error can't find the container with id b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b
Dec 03 13:57:59.709922 master-0 kubenswrapper[8988]: I1203 13:57:59.709836 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerStarted","Data":"c446c9128a6cdcc1b5ae17378b359fd872223fb29994a04233b4de462e78ee58"}
Dec 03 13:57:59.713044 master-0 kubenswrapper[8988]: I1203 13:57:59.711477 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"e90ae89b899f7c4adaa8b2c1c88e7171c1cb37b6c4cab4e7e1756faa4c54abf5"}
Dec 03 13:57:59.713950 master-0 kubenswrapper[8988]: I1203 13:57:59.713906 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409"}
Dec 03 13:57:59.714867 master-0 kubenswrapper[8988]: I1203 13:57:59.714829 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"fed7f1938d835b6bb43ab21dcc685c6a28234547c348fa1bb19896359af832d6"}
Dec 03 13:57:59.714867 master-0 kubenswrapper[8988]: I1203 13:57:59.714856 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"06a6bfff8d933d9670b8e8e8de6cfda51fcb359ae53ec1c55a93a9738f4fc201"}
Dec 03 13:57:59.718423 master-0 kubenswrapper[8988]: I1203 13:57:59.718376 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-85dbd94574-8jfp5_bcc78129-4a81-410e-9a42-b12043b5a75a/ingress-operator/0.log"
Dec 03 13:57:59.718542 master-0 kubenswrapper[8988]: I1203 13:57:59.718509 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"5d702f79deb55550ea5730b11fd4eabda1b93c216210a04d377b3af5044f1982"}
Dec 03 13:57:59.721010 master-0 kubenswrapper[8988]: I1203 13:57:59.720964 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"317e7dd62fa100db1d45ff57aba484e787374c6332b21d016a43057d248fc561"}
Dec 03 13:57:59.723614 master-0 kubenswrapper[8988]: I1203 13:57:59.723569 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b"}
Dec 03 13:57:59.726328 master-0 kubenswrapper[8988]: I1203 13:57:59.725609 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83"}
Dec 03 13:58:00.229348 master-0 kubenswrapper[8988]: I1203 13:58:00.229147 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"]
Dec 03 13:58:00.234896 master-0 kubenswrapper[8988]: W1203 13:58:00.234814 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7663a25e_236d_4b1d_83ce_733ab146dee3.slice/crio-cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393 WatchSource:0}: Error finding container cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393: Status 404 returned error can't find the container with id cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393
Dec 03 13:58:00.296741 master-0 kubenswrapper[8988]: I1203 13:58:00.265669 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"]
Dec 03 13:58:00.296741 master-0 kubenswrapper[8988]: I1203 13:58:00.271079 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"]
Dec 03 13:58:00.297873 master-0 kubenswrapper[8988]: I1203 13:58:00.297816 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"]
Dec 03 13:58:00.384899 master-0 kubenswrapper[8988]: W1203 13:58:00.384834 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df2889c_99f7_402a_9d50_18ccf427179c.slice/crio-dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f WatchSource:0}: Error finding container dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f: Status 404 returned error can't find the container with id dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f
Dec 03 13:58:00.742872 master-0 kubenswrapper[8988]: I1203 13:58:00.742698 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"8154ea1badaee93911650d5b6c9a0d50ee5f865cc92efee68e3e567a26fac336"}
Dec 03 13:58:00.744837 master-0 kubenswrapper[8988]: I1203 13:58:00.744688 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f"}
Dec 03 13:58:00.746354 master-0 kubenswrapper[8988]: I1203 13:58:00.746299 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"c0c28bf839b2e5e1bceddd001eae58acc3775691713261478c7742c6a0302aba"}
Dec 03 13:58:00.748719 master-0 kubenswrapper[8988]: I1203 13:58:00.747555 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393"}
Dec 03 13:58:00.753450 master-0 kubenswrapper[8988]: I1203 13:58:00.751640 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"286e240000163980fd7b266d4511288b3506ec5bbae38b7d39eb26613d430cda"}
Dec 03 13:58:00.944197 master-0 kubenswrapper[8988]: I1203 13:58:00.942296 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"]
Dec 03 13:58:00.971936 master-0 kubenswrapper[8988]: W1203 13:58:00.970367 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b3c1fb_6f81_4067_98da_681d6c7c33e4.slice/crio-6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e WatchSource:0}: Error finding container 6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e: Status 404 returned error can't find the container with id 6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e
Dec 03 13:58:01.763180 master-0 kubenswrapper[8988]: I1203 13:58:01.763084 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"079a77d7b6e77ececcba1148cb7a0f749581bf55fe762e3242f107e6b8f5cdca"}
Dec 03 13:58:01.768382 master-0 kubenswrapper[8988]: I1203 13:58:01.768231 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"dcbe9987f77ff713c092b1bf8411528eede8d9b0b5e7047282320f1ad985745a"}
Dec 03 13:58:01.768382 master-0 kubenswrapper[8988]: I1203 13:58:01.768365 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"9d9885192efaa088fe29d3228dd2d4225298754ffb0326c83d203b3ded8fe9b1"}
Dec 03 13:58:01.771163 master-0 kubenswrapper[8988]: I1203 13:58:01.771106 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"e428ce79dd4eb6b57bb33b528a8e437f92a3e0d91130b0f0524a4aced7793334"}
Dec 03 13:58:01.771270 master-0 kubenswrapper[8988]: I1203 13:58:01.771166 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e"}
Dec 03 13:58:01.771977 master-0 kubenswrapper[8988]: I1203 13:58:01.771602 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:58:01.773712 master-0 kubenswrapper[8988]: I1203 13:58:01.773660 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"987691eed2494b5e7a9d7c407aeb51b5d0ff0a9c31c9a683dc5b41ae08c3b546"}
Dec 03 13:58:01.774376 master-0 kubenswrapper[8988]: I1203 13:58:01.774035 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:58:01.776772 master-0 kubenswrapper[8988]: I1203 13:58:01.776691 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:58:01.778839 master-0 kubenswrapper[8988]: I1203 13:58:01.778792 8988 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:58:02.049657 master-0 kubenswrapper[8988]: I1203 13:58:02.049008 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podStartSLOduration=4.040954477 podStartE2EDuration="4.040954477s" podCreationTimestamp="2025-12-03 13:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:58:02.024096787 +0000 UTC m=+183.212165090" watchObservedRunningTime="2025-12-03 13:58:02.040954477 +0000 UTC m=+183.229022750"
Dec 03 13:58:02.065120 master-0 kubenswrapper[8988]: I1203 13:58:02.064996 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podStartSLOduration=4.064961626 podStartE2EDuration="4.064961626s" podCreationTimestamp="2025-12-03 13:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:58:02.063462042 +0000 UTC m=+183.251530335" watchObservedRunningTime="2025-12-03 13:58:02.064961626 +0000 UTC m=+183.253029909"
Dec 03 13:58:02.089821 master-0 kubenswrapper[8988]: I1203 13:58:02.087807 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Dec 03 13:58:02.092700 master-0 kubenswrapper[8988]: I1203 13:58:02.092634 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Dec 03 13:58:02.092862 master-0 kubenswrapper[8988]: I1203 13:58:02.092851 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:02.095209 master-0 kubenswrapper[8988]: I1203 13:58:02.095160 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7ctx2"
Dec 03 13:58:02.095749 master-0 kubenswrapper[8988]: I1203 13:58:02.095718 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Dec 03 13:58:02.119171 master-0 kubenswrapper[8988]: I1203 13:58:02.119040 8988 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podStartSLOduration=4.119007048 podStartE2EDuration="4.119007048s" podCreationTimestamp="2025-12-03 13:57:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:58:02.107508634 +0000 UTC m=+183.295576937" watchObservedRunningTime="2025-12-03 13:58:02.119007048 +0000 UTC m=+183.307075341"
Dec 03 13:58:02.233889 master-0 kubenswrapper[8988]: I1203 13:58:02.233398 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:02.233889 master-0 kubenswrapper[8988]: I1203 13:58:02.233506 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:02.234376 master-0 kubenswrapper[8988]: I1203 13:58:02.234034 8988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.336278 master-0 kubenswrapper[8988]: I1203 13:58:02.336063 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.336278 master-0 kubenswrapper[8988]: I1203 13:58:02.336278 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.336620 master-0 kubenswrapper[8988]: I1203 13:58:02.336333 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.336977 master-0 kubenswrapper[8988]: I1203 13:58:02.336852 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.336977 master-0 kubenswrapper[8988]: I1203 13:58:02.336856 8988 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.487030 master-0 kubenswrapper[8988]: I1203 13:58:02.486936 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.611587 master-0 kubenswrapper[8988]: I1203 13:58:02.611389 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t8rt7"] Dec 03 13:58:02.613610 master-0 kubenswrapper[8988]: I1203 13:58:02.613559 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.628484 master-0 kubenswrapper[8988]: I1203 13:58:02.627871 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d" Dec 03 13:58:02.645386 master-0 kubenswrapper[8988]: I1203 13:58:02.642380 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.645386 master-0 kubenswrapper[8988]: I1203 13:58:02.642482 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.645386 master-0 kubenswrapper[8988]: I1203 13:58:02.642522 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.645386 master-0 kubenswrapper[8988]: I1203 13:58:02.642997 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8rt7"] Dec 03 13:58:02.675498 master-0 kubenswrapper[8988]: I1203 13:58:02.674500 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-582c5"] Dec 03 13:58:02.690914 master-0 kubenswrapper[8988]: I1203 
13:58:02.688747 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.692463 master-0 kubenswrapper[8988]: I1203 13:58:02.692146 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv" Dec 03 13:58:02.713372 master-0 kubenswrapper[8988]: I1203 13:58:02.712186 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-582c5"] Dec 03 13:58:02.733251 master-0 kubenswrapper[8988]: I1203 13:58:02.733147 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:02.745019 master-0 kubenswrapper[8988]: I1203 13:58:02.744948 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.745019 master-0 kubenswrapper[8988]: I1203 13:58:02.745024 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.745434 master-0 kubenswrapper[8988]: I1203 13:58:02.745075 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " 
pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.745434 master-0 kubenswrapper[8988]: I1203 13:58:02.745107 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.745434 master-0 kubenswrapper[8988]: I1203 13:58:02.745130 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.745434 master-0 kubenswrapper[8988]: I1203 13:58:02.745371 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.746097 master-0 kubenswrapper[8988]: I1203 13:58:02.745976 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.746408 master-0 kubenswrapper[8988]: I1203 13:58:02.746306 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: 
\"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.767608 master-0 kubenswrapper[8988]: I1203 13:58:02.767505 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:02.828670 master-0 kubenswrapper[8988]: I1203 13:58:02.828603 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"] Dec 03 13:58:02.829421 master-0 kubenswrapper[8988]: I1203 13:58:02.829381 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" podUID="728bc191-0639-49c8-a939-a81759bec820" containerName="controller-manager" containerID="cri-o://2101f07d21b09bef3562a60720d029d2f6c54a9dc924b95901f0ed03cda4f409" gracePeriod=30 Dec 03 13:58:02.853469 master-0 kubenswrapper[8988]: I1203 13:58:02.849862 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.853469 master-0 kubenswrapper[8988]: I1203 13:58:02.850086 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.853469 master-0 kubenswrapper[8988]: 
I1203 13:58:02.850174 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.857201 master-0 kubenswrapper[8988]: I1203 13:58:02.857051 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.857201 master-0 kubenswrapper[8988]: I1203 13:58:02.857056 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:02.984316 master-0 kubenswrapper[8988]: I1203 13:58:02.983576 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:03.097805 master-0 kubenswrapper[8988]: I1203 13:58:03.095079 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"] Dec 03 13:58:03.097805 master-0 kubenswrapper[8988]: I1203 13:58:03.095522 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" podUID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" containerName="route-controller-manager" containerID="cri-o://55445c84ee27dfac14466bc7e8118d367fd0229276697cf9717729560bc34702" gracePeriod=30 Dec 03 13:58:03.158797 master-0 kubenswrapper[8988]: I1203 13:58:03.156833 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:03.346932 master-0 kubenswrapper[8988]: I1203 13:58:03.346851 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:03.794904 master-0 kubenswrapper[8988]: I1203 13:58:03.794640 8988 generic.go:334] "Generic (PLEG): container finished" podID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" containerID="55445c84ee27dfac14466bc7e8118d367fd0229276697cf9717729560bc34702" exitCode=0 Dec 03 13:58:03.794904 master-0 kubenswrapper[8988]: I1203 13:58:03.794814 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" event={"ID":"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6","Type":"ContainerDied","Data":"55445c84ee27dfac14466bc7e8118d367fd0229276697cf9717729560bc34702"} Dec 03 13:58:03.797750 master-0 kubenswrapper[8988]: I1203 13:58:03.797405 8988 generic.go:334] "Generic (PLEG): container finished" podID="728bc191-0639-49c8-a939-a81759bec820" containerID="2101f07d21b09bef3562a60720d029d2f6c54a9dc924b95901f0ed03cda4f409" exitCode=0 Dec 03 13:58:03.797750 master-0 kubenswrapper[8988]: I1203 13:58:03.797522 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" event={"ID":"728bc191-0639-49c8-a939-a81759bec820","Type":"ContainerDied","Data":"2101f07d21b09bef3562a60720d029d2f6c54a9dc924b95901f0ed03cda4f409"} Dec 03 13:58:04.041687 master-0 kubenswrapper[8988]: I1203 13:58:04.041594 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 13:58:04.042964 master-0 kubenswrapper[8988]: I1203 13:58:04.042919 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.072595 master-0 kubenswrapper[8988]: I1203 13:58:04.072535 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.072762 master-0 kubenswrapper[8988]: I1203 13:58:04.072656 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.072762 master-0 kubenswrapper[8988]: I1203 13:58:04.072687 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.174071 master-0 kubenswrapper[8988]: I1203 13:58:04.173993 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.174071 master-0 kubenswrapper[8988]: I1203 13:58:04.174074 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pd4\" (UniqueName: 
\"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.174465 master-0 kubenswrapper[8988]: I1203 13:58:04.174143 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.174623 master-0 kubenswrapper[8988]: I1203 13:58:04.174590 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.174623 master-0 kubenswrapper[8988]: I1203 13:58:04.174610 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:04.381347 master-0 kubenswrapper[8988]: I1203 13:58:04.381153 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 13:58:04.387231 master-0 kubenswrapper[8988]: I1203 13:58:04.386877 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" 
Dec 03 13:58:04.669441 master-0 kubenswrapper[8988]: I1203 13:58:04.667449 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 13:58:04.864097 master-0 kubenswrapper[8988]: I1203 13:58:04.864000 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2ztl9"]
Dec 03 13:58:04.865673 master-0 kubenswrapper[8988]: I1203 13:58:04.865646 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.870563 master-0 kubenswrapper[8988]: I1203 13:58:04.870486 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 13:58:04.889135 master-0 kubenswrapper[8988]: I1203 13:58:04.889072 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.889306 master-0 kubenswrapper[8988]: I1203 13:58:04.889141 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.889306 master-0 kubenswrapper[8988]: I1203 13:58:04.889170 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.889521 master-0 kubenswrapper[8988]: I1203 13:58:04.889482 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.990996 master-0 kubenswrapper[8988]: I1203 13:58:04.990833 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.990996 master-0 kubenswrapper[8988]: I1203 13:58:04.990919 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.990996 master-0 kubenswrapper[8988]: I1203 13:58:04.990951 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.990996 master-0 kubenswrapper[8988]: I1203 13:58:04.990974 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.991404 master-0 kubenswrapper[8988]: I1203 13:58:04.991110 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.991954 master-0 kubenswrapper[8988]: I1203 13:58:04.991918 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:04.995339 master-0 kubenswrapper[8988]: I1203 13:58:04.995290 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:05.019551 master-0 kubenswrapper[8988]: I1203 13:58:05.019448 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:05.198155 master-0 kubenswrapper[8988]: I1203 13:58:05.198053 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:05.364567 master-0 kubenswrapper[8988]: I1203 13:58:05.364504 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:58:05.371280 master-0 kubenswrapper[8988]: I1203 13:58:05.371214 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847682 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config\") pod \"728bc191-0639-49c8-a939-a81759bec820\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847737 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert\") pod \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847823 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvbr\" (UniqueName: \"kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr\") pod \"728bc191-0639-49c8-a939-a81759bec820\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847850 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config\") pod \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847876 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles\") pod \"728bc191-0639-49c8-a939-a81759bec820\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847895 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca\") pod \"728bc191-0639-49c8-a939-a81759bec820\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847910 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqrv\" (UniqueName: \"kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv\") pod \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847949 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert\") pod \"728bc191-0639-49c8-a939-a81759bec820\" (UID: \"728bc191-0639-49c8-a939-a81759bec820\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.847966 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca\") pod \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\" (UID: \"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6\") "
Dec 03 13:58:05.850426 master-0 kubenswrapper[8988]: I1203 13:58:05.849349 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" (UID: "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:58:05.851212 master-0 kubenswrapper[8988]: I1203 13:58:05.850503 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config" (OuterVolumeSpecName: "config") pod "728bc191-0639-49c8-a939-a81759bec820" (UID: "728bc191-0639-49c8-a939-a81759bec820"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:58:05.851212 master-0 kubenswrapper[8988]: I1203 13:58:05.851180 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca" (OuterVolumeSpecName: "client-ca") pod "728bc191-0639-49c8-a939-a81759bec820" (UID: "728bc191-0639-49c8-a939-a81759bec820"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:58:05.852176 master-0 kubenswrapper[8988]: I1203 13:58:05.852116 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "728bc191-0639-49c8-a939-a81759bec820" (UID: "728bc191-0639-49c8-a939-a81759bec820"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:58:05.852473 master-0 kubenswrapper[8988]: I1203 13:58:05.852380 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config" (OuterVolumeSpecName: "config") pod "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" (UID: "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 13:58:05.855744 master-0 kubenswrapper[8988]: I1203 13:58:05.855698 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "728bc191-0639-49c8-a939-a81759bec820" (UID: "728bc191-0639-49c8-a939-a81759bec820"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:58:05.860118 master-0 kubenswrapper[8988]: I1203 13:58:05.860038 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv" (OuterVolumeSpecName: "kube-api-access-vqqrv") pod "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" (UID: "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6"). InnerVolumeSpecName "kube-api-access-vqqrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:58:05.861654 master-0 kubenswrapper[8988]: I1203 13:58:05.861570 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" (UID: "1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 13:58:05.862168 master-0 kubenswrapper[8988]: I1203 13:58:05.862113 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr" (OuterVolumeSpecName: "kube-api-access-5fvbr") pod "728bc191-0639-49c8-a939-a81759bec820" (UID: "728bc191-0639-49c8-a939-a81759bec820"). InnerVolumeSpecName "kube-api-access-5fvbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:58:05.868409 master-0 kubenswrapper[8988]: I1203 13:58:05.868379 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"
Dec 03 13:58:05.869307 master-0 kubenswrapper[8988]: I1203 13:58:05.868367 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq" event={"ID":"728bc191-0639-49c8-a939-a81759bec820","Type":"ContainerDied","Data":"02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad"}
Dec 03 13:58:05.869425 master-0 kubenswrapper[8988]: I1203 13:58:05.869399 8988 scope.go:117] "RemoveContainer" containerID="2101f07d21b09bef3562a60720d029d2f6c54a9dc924b95901f0ed03cda4f409"
Dec 03 13:58:05.886795 master-0 kubenswrapper[8988]: I1203 13:58:05.881009 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s" event={"ID":"1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6","Type":"ContainerDied","Data":"214cfb90bd01e2a38cc00a74c6415843347ddcbd2c5b20e0758acbc9d4f19c58"}
Dec 03 13:58:05.886795 master-0 kubenswrapper[8988]: I1203 13:58:05.881236 8988 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951062 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951126 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951141 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fvbr\" (UniqueName: \"kubernetes.io/projected/728bc191-0639-49c8-a939-a81759bec820-kube-api-access-5fvbr\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951151 8988 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-config\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951160 8988 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951171 8988 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqqrv\" (UniqueName: \"kubernetes.io/projected/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-kube-api-access-vqqrv\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.951149 master-0 kubenswrapper[8988]: I1203 13:58:05.951180 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/728bc191-0639-49c8-a939-a81759bec820-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.952060 master-0 kubenswrapper[8988]: I1203 13:58:05.951190 8988 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/728bc191-0639-49c8-a939-a81759bec820-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.952060 master-0 kubenswrapper[8988]: I1203 13:58:05.951198 8988 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:05.978011 master-0 kubenswrapper[8988]: I1203 13:58:05.977926 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"]
Dec 03 13:58:05.978379 master-0 kubenswrapper[8988]: E1203 13:58:05.978209 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" containerName="route-controller-manager"
Dec 03 13:58:05.978379 master-0 kubenswrapper[8988]: I1203 13:58:05.978236 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" containerName="route-controller-manager"
Dec 03 13:58:05.978379 master-0 kubenswrapper[8988]: E1203 13:58:05.978245 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="728bc191-0639-49c8-a939-a81759bec820" containerName="controller-manager"
Dec 03 13:58:05.978379 master-0 kubenswrapper[8988]: I1203 13:58:05.978252 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="728bc191-0639-49c8-a939-a81759bec820" containerName="controller-manager"
Dec 03 13:58:05.978529 master-0 kubenswrapper[8988]: I1203 13:58:05.978464 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="728bc191-0639-49c8-a939-a81759bec820" containerName="controller-manager"
Dec 03 13:58:05.978529 master-0 kubenswrapper[8988]: I1203 13:58:05.978481 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" containerName="route-controller-manager"
Dec 03 13:58:05.979183 master-0 kubenswrapper[8988]: I1203 13:58:05.979138 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"]
Dec 03 13:58:05.981578 master-0 kubenswrapper[8988]: I1203 13:58:05.981488 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:05.981647 master-0 kubenswrapper[8988]: I1203 13:58:05.981616 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:05.985153 master-0 kubenswrapper[8988]: I1203 13:58:05.984609 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 13:58:05.990315 master-0 kubenswrapper[8988]: I1203 13:58:05.985494 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:58:05.990315 master-0 kubenswrapper[8988]: I1203 13:58:05.986171 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9"
Dec 03 13:58:05.990315 master-0 kubenswrapper[8988]: I1203 13:58:05.986400 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 13:58:05.990315 master-0 kubenswrapper[8988]: I1203 13:58:05.986889 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 13:58:05.990315 master-0 kubenswrapper[8988]: I1203 13:58:05.988112 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 13:58:05.990555
master-0 kubenswrapper[8988]: I1203 13:58:05.990521 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"]
Dec 03 13:58:05.994763 master-0 kubenswrapper[8988]: I1203 13:58:05.994230 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"]
Dec 03 13:58:05.997749 master-0 kubenswrapper[8988]: I1203 13:58:05.997697 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 13:58:06.060207 master-0 kubenswrapper[8988]: I1203 13:58:06.060119 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"]
Dec 03 13:58:06.069523 master-0 kubenswrapper[8988]: I1203 13:58:06.062519 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fb5f97c4d-bcdbq"]
Dec 03 13:58:06.084738 master-0 kubenswrapper[8988]: I1203 13:58:06.084686 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"]
Dec 03 13:58:06.086569 master-0 kubenswrapper[8988]: I1203 13:58:06.086505 8988 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f6f54d5f6-ch42s"]
Dec 03 13:58:06.162502 master-0 kubenswrapper[8988]: I1203 13:58:06.162326 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.162502 master-0 kubenswrapper[8988]: I1203 13:58:06.162485 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.162839 master-0 kubenswrapper[8988]: I1203 13:58:06.162540 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.162839 master-0 kubenswrapper[8988]: I1203 13:58:06.162584 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.162839 master-0 kubenswrapper[8988]: I1203 13:58:06.162800 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.162983 master-0 kubenswrapper[8988]: I1203 13:58:06.162874 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.162983 master-0 kubenswrapper[8988]: I1203 13:58:06.162911 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.163074 master-0 kubenswrapper[8988]: I1203 13:58:06.162982 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.264194 master-0 kubenswrapper[8988]: I1203 13:58:06.264086 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.264194 master-0 kubenswrapper[8988]: I1203 13:58:06.264181 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.264194 master-0 kubenswrapper[8988]: I1203 13:58:06.264219 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.264624 master-0 kubenswrapper[8988]: I1203 13:58:06.264251 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.264624 master-0 kubenswrapper[8988]: I1203 13:58:06.264328 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.264624 master-0 kubenswrapper[8988]: I1203 13:58:06.264358 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.264624 master-0 kubenswrapper[8988]: I1203 13:58:06.264393 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.264624 master-0 kubenswrapper[8988]: I1203 13:58:06.264453 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.265315 master-0 kubenswrapper[8988]: I1203 13:58:06.265229 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.265765 master-0 kubenswrapper[8988]: I1203 13:58:06.265683 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.266060 master-0 kubenswrapper[8988]: I1203 13:58:06.265847 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.266192 master-0 kubenswrapper[8988]: I1203 13:58:06.266148 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.266575 master-0 kubenswrapper[8988]: I1203 13:58:06.266517 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.288902 master-0 kubenswrapper[8988]: I1203 13:58:06.288832 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.290303 master-0 kubenswrapper[8988]: I1203 13:58:06.290269 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:06.290522 master-0 kubenswrapper[8988]: I1203 13:58:06.290444 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.393332 master-0 kubenswrapper[8988]: I1203 13:58:06.393220 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:06.414442 master-0 kubenswrapper[8988]: I1203 13:58:06.414110 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:07.030893 master-0 kubenswrapper[8988]: I1203 13:58:07.030815 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6" path="/var/lib/kubelet/pods/1dbfc04c-1ffb-4594-bfd1-5dfaa1d6dea6/volumes"
Dec 03 13:58:07.031570 master-0 kubenswrapper[8988]: I1203 13:58:07.031547 8988 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="728bc191-0639-49c8-a939-a81759bec820" path="/var/lib/kubelet/pods/728bc191-0639-49c8-a939-a81759bec820/volumes"
Dec 03 13:58:08.403928 master-0 kubenswrapper[8988]: I1203 13:58:08.403833 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"]
Dec 03 13:58:08.404925 master-0 kubenswrapper[8988]: I1203 13:58:08.404901 8988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.412586 master-0 kubenswrapper[8988]: I1203 13:58:08.409234 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68"
Dec 03 13:58:08.413401 master-0 kubenswrapper[8988]: I1203 13:58:08.412949 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:58:08.413401 master-0 kubenswrapper[8988]: I1203 13:58:08.413113 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 13:58:08.413401 master-0 kubenswrapper[8988]: I1203 13:58:08.413175 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 13:58:08.413654 master-0 kubenswrapper[8988]: I1203 13:58:08.413446 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 13:58:08.413923 master-0 kubenswrapper[8988]: I1203 13:58:08.413854 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 13:58:08.427112 master-0 kubenswrapper[8988]: I1203 13:58:08.427061 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"]
Dec 03 13:58:08.597927 master-0 kubenswrapper[8988]: I1203 13:58:08.597853 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.597927 master-0 kubenswrapper[8988]: I1203 13:58:08.597941 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.604358 master-0 kubenswrapper[8988]: I1203 13:58:08.604302 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.604520 master-0 kubenswrapper[8988]: I1203 13:58:08.604370 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.621846 master-0 kubenswrapper[8988]: I1203 13:58:08.621755 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-582c5"]
Dec 03 13:58:08.705815 master-0 kubenswrapper[8988]: I1203 13:58:08.705645 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.705815 master-0 kubenswrapper[8988]: I1203 13:58:08.705732 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.705815 master-0 kubenswrapper[8988]: I1203 13:58:08.705818 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.706171 master-0 kubenswrapper[8988]: I1203 13:58:08.705873 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.707578 master-0 kubenswrapper[8988]: I1203 13:58:08.707537 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.707877 master-0 kubenswrapper[8988]: I1203 13:58:08.707833 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.725581 master-0 kubenswrapper[8988]: I1203 13:58:08.725530 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:08.778398 master-0 kubenswrapper[8988]: I1203 13:58:08.778326 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:09.036375 master-0 kubenswrapper[8988]: I1203 13:58:09.036185 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:10.683424 master-0 kubenswrapper[8988]: I1203 13:58:10.683337 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7fwtv"]
Dec 03 13:58:10.685070 master-0 kubenswrapper[8988]: I1203 13:58:10.685030 8988 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.832967 master-0 kubenswrapper[8988]: I1203 13:58:10.832838 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.833301 master-0 kubenswrapper[8988]: I1203 13:58:10.833012 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.833301 master-0 kubenswrapper[8988]: I1203 13:58:10.833145 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.934802 master-0 kubenswrapper[8988]: I1203 13:58:10.934576 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.934802 master-0 kubenswrapper[8988]: I1203 13:58:10.934715 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.934802 master-0 kubenswrapper[8988]: I1203 13:58:10.934753 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.935347 master-0 kubenswrapper[8988]: I1203 13:58:10.935250 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:10.935462 master-0 kubenswrapper[8988]: I1203 13:58:10.935411 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:13.527747 master-0 kubenswrapper[8988]: I1203 13:58:13.527674 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fwtv"]
Dec 03 13:58:13.550981 master-0 kubenswrapper[8988]: I1203 13:58:13.550137 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:13.708137 master-0 kubenswrapper[8988]: I1203 13:58:13.708028 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:20.805448 master-0 kubenswrapper[8988]: I1203 13:58:20.805115 8988 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"]
Dec 03 13:58:20.806210 master-0 kubenswrapper[8988]: I1203 13:58:20.806168 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:20.809791 master-0 kubenswrapper[8988]: I1203 13:58:20.809713 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 13:58:20.809981 master-0 kubenswrapper[8988]: I1203 13:58:20.809793 8988 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 13:58:20.810301 master-0 kubenswrapper[8988]: I1203 13:58:20.810248 8988 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 13:58:20.825036 master-0 kubenswrapper[8988]: I1203 13:58:20.824975 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"]
Dec 03 13:58:20.980724 master-0 kubenswrapper[8988]: I1203 13:58:20.980648 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:20.980724 master-0
kubenswrapper[8988]: I1203 13:58:20.980728 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:20.981075 master-0 kubenswrapper[8988]: I1203 13:58:20.980800 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.082416 master-0 kubenswrapper[8988]: I1203 13:58:21.082214 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.082416 master-0 kubenswrapper[8988]: I1203 13:58:21.082309 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.082416 master-0 kubenswrapper[8988]: I1203 13:58:21.082363 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.084297 master-0 kubenswrapper[8988]: I1203 13:58:21.082949 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.095057 master-0 kubenswrapper[8988]: I1203 13:58:21.094284 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.151183 master-0 kubenswrapper[8988]: I1203 13:58:21.150808 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:21.433064 master-0 kubenswrapper[8988]: I1203 13:58:21.432861 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:25.335884 master-0 kubenswrapper[8988]: I1203 13:58:25.335786 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t8rt7"] Dec 03 13:58:27.683279 master-0 kubenswrapper[8988]: I1203 13:58:27.683165 8988 scope.go:117] "RemoveContainer" containerID="55445c84ee27dfac14466bc7e8118d367fd0229276697cf9717729560bc34702" Dec 03 13:58:27.827296 master-0 kubenswrapper[8988]: W1203 13:58:27.826059 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799e819f_f4b2_4ac9_8fa4_7d4da7a79285.slice/crio-09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23 WatchSource:0}: Error finding container 09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23: Status 404 returned error can't find the container with id 09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23 Dec 03 13:58:28.023101 master-0 kubenswrapper[8988]: I1203 13:58:28.023034 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23"} Dec 03 13:58:28.027206 master-0 kubenswrapper[8988]: I1203 13:58:28.027123 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"4568b0000197ea509dbc549f285c717622711f0c697e5e0a5502e9e4faaedd8e"} Dec 03 13:58:28.186791 master-0 kubenswrapper[8988]: I1203 13:58:28.186705 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Dec 03 13:58:28.319725 master-0 kubenswrapper[8988]: I1203 13:58:28.319640 8988 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"] Dec 03 13:58:28.340383 master-0 kubenswrapper[8988]: I1203 13:58:28.338080 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 13:58:28.348996 master-0 kubenswrapper[8988]: W1203 13:58:28.348880 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod486d4964_18cc_4adc_b82d_b09627cadda4.slice/crio-9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3 WatchSource:0}: Error finding container 9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3: Status 404 returned error can't find the container with id 9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3 Dec 03 13:58:28.352340 master-0 kubenswrapper[8988]: W1203 13:58:28.349980 8988 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecc68b17_9112_471d_89f9_15bf30dfa004.slice/crio-53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82 WatchSource:0}: Error finding container 53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82: Status 404 returned error can't find the container with id 53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82 Dec 03 13:58:28.455584 master-0 kubenswrapper[8988]: I1203 13:58:28.455516 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7fwtv"] Dec 03 13:58:28.470429 master-0 kubenswrapper[8988]: I1203 13:58:28.468973 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"] Dec 03 13:58:28.483453 master-0 kubenswrapper[8988]: I1203 13:58:28.483401 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"] Dec 03 13:58:28.502245 
master-0 kubenswrapper[8988]: I1203 13:58:28.497855 8988 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"] Dec 03 13:58:28.629297 master-0 kubenswrapper[8988]: I1203 13:58:28.626419 8988 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 13:58:28.629297 master-0 kubenswrapper[8988]: I1203 13:58:28.627622 8988 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630131 8988 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630226 8988 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: E1203 13:58:28.630647 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630663 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: E1203 13:58:28.630677 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630685 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: E1203 13:58:28.630711 8988 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" 
containerName="kube-apiserver-insecure-readyz" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630720 8988 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630859 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630883 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz" Dec 03 13:58:28.630994 master-0 kubenswrapper[8988]: I1203 13:58:28.630899 8988 memory_manager.go:354] "RemoveStaleState removing state" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup" Dec 03 13:58:28.636305 master-0 kubenswrapper[8988]: I1203 13:58:28.632955 8988 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.636305 master-0 kubenswrapper[8988]: I1203 13:58:28.633971 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver" containerID="cri-o://d559032002ae450f2dcc5a6551686ae528fbdc12019934f45dbbd1835ac0a064" gracePeriod=15 Dec 03 13:58:28.636305 master-0 kubenswrapper[8988]: I1203 13:58:28.634215 8988 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1" gracePeriod=15 Dec 03 13:58:28.663085 master-0 kubenswrapper[8988]: I1203 13:58:28.662967 8988 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-582c5"] Dec 03 13:58:28.773337 master-0 kubenswrapper[8988]: I1203 13:58:28.773257 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773362 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773394 8988 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773443 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773473 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773547 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773590 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.773840 master-0 kubenswrapper[8988]: I1203 13:58:28.773630 8988 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.875860 master-0 kubenswrapper[8988]: I1203 13:58:28.875762 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.875860 master-0 kubenswrapper[8988]: I1203 13:58:28.875851 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.875860 master-0 kubenswrapper[8988]: I1203 13:58:28.875881 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.875911 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.875935 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.875953 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.875975 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876056 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.875991 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876080 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876169 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876171 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876207 8988 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876273 master-0 kubenswrapper[8988]: I1203 13:58:28.876109 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876865 master-0 kubenswrapper[8988]: I1203 13:58:28.876321 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:28.876865 master-0 kubenswrapper[8988]: I1203 13:58:28.876353 8988 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:29.030127 master-0 kubenswrapper[8988]: I1203 13:58:29.029905 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.032987 master-0 kubenswrapper[8988]: I1203 13:58:29.032897 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.034400 master-0 kubenswrapper[8988]: I1203 13:58:29.034349 8988 
status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.038516 master-0 kubenswrapper[8988]: I1203 13:58:29.038430 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.042222 master-0 kubenswrapper[8988]: I1203 13:58:29.042113 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.043247 master-0 kubenswrapper[8988]: I1203 13:58:29.043168 8988 status_manager.go:851] "Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.044156 master-0 kubenswrapper[8988]: I1203 13:58:29.044088 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.044780 master-0 kubenswrapper[8988]: I1203 13:58:29.044737 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.045276 master-0 kubenswrapper[8988]: I1203 13:58:29.045226 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.047765 master-0 kubenswrapper[8988]: I1203 13:58:29.047742 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-78ddcf56f9-8l84w_63aae3b9-9a72-497e-af01-5d8b8d0ac876/multus-admission-controller/0.log" Dec 03 13:58:29.048034 master-0 kubenswrapper[8988]: I1203 13:58:29.048011 8988 generic.go:334] "Generic (PLEG): container finished" podID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerID="bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881" exitCode=137 Dec 03 13:58:29.048197 master-0 kubenswrapper[8988]: I1203 13:58:29.048109 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerDied","Data":"bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881"} Dec 03 13:58:29.048283 master-0 kubenswrapper[8988]: I1203 13:58:29.048231 
8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" event={"ID":"63aae3b9-9a72-497e-af01-5d8b8d0ac876","Type":"ContainerDied","Data":"d923e2294dc5bd349ef1897a915245d9a43be1c9d681ac05585e4028bf44c392"} Dec 03 13:58:29.048361 master-0 kubenswrapper[8988]: I1203 13:58:29.048274 8988 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d923e2294dc5bd349ef1897a915245d9a43be1c9d681ac05585e4028bf44c392" Dec 03 13:58:29.050487 master-0 kubenswrapper[8988]: I1203 13:58:29.050416 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"c5498229c064870000ea3daf72432927db1bd1e50fb18b1e394aaea41976762e"} Dec 03 13:58:29.052182 master-0 kubenswrapper[8988]: I1203 13:58:29.052136 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5dcaccc5-46b1-4a38-b3af-6839dec529d3","Type":"ContainerStarted","Data":"8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d"} Dec 03 13:58:29.052425 master-0 kubenswrapper[8988]: I1203 13:58:29.052133 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.052844 master-0 kubenswrapper[8988]: I1203 13:58:29.052807 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Dec 03 13:58:29.053692 master-0 kubenswrapper[8988]: I1203 13:58:29.053656 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.054245 master-0 kubenswrapper[8988]: I1203 13:58:29.054161 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rjqz" event={"ID":"03494fce-881e-4eb6-bc3d-570f1d8e7c52","Type":"ContainerStarted","Data":"99aab5d6addd41c622154cc6f270a6df7b17355eeaee15a1257331779d37b167"} Dec 03 13:58:29.054485 master-0 kubenswrapper[8988]: I1203 13:58:29.054421 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.055352 master-0 kubenswrapper[8988]: I1203 13:58:29.055307 8988 status_manager.go:851] "Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.056124 master-0 kubenswrapper[8988]: I1203 13:58:29.056064 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.056834 master-0 kubenswrapper[8988]: I1203 13:58:29.056793 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.057075 master-0 kubenswrapper[8988]: I1203 13:58:29.057044 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"f1da71cf736ec67987c67a2c683d673658afbbde7b7d3088a88079d70f7698eb"} Dec 03 13:58:29.057733 master-0 kubenswrapper[8988]: I1203 13:58:29.057656 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.058436 master-0 kubenswrapper[8988]: I1203 13:58:29.058399 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.059458 master-0 kubenswrapper[8988]: I1203 13:58:29.059406 8988 status_manager.go:851] "Failed to get status for pod" 
podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.060481 master-0 kubenswrapper[8988]: I1203 13:58:29.060351 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.060716 master-0 kubenswrapper[8988]: I1203 13:58:29.060680 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"0b5ae198d8458a64b4fc5c2a2dd3e600bd9b382a477dec6dc5d365c36f83700c"} Dec 03 13:58:29.060801 master-0 kubenswrapper[8988]: I1203 13:58:29.060733 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"f040ef6a7a2c71d1bb88a8e6c44278311cf99bd34b4ad6f1e2a093046f77970f"} Dec 03 13:58:29.061436 master-0 kubenswrapper[8988]: I1203 13:58:29.061390 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.062084 master-0 kubenswrapper[8988]: I1203 13:58:29.061999 8988 status_manager.go:851] "Failed to get status for pod" 
podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.063148 master-0 kubenswrapper[8988]: I1203 13:58:29.063100 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"ff1b2a1ff9154238692ebb6e0ae688f400ae8b743c546d838dde5d5bc888fe8a"} Dec 03 13:58:29.063440 master-0 kubenswrapper[8988]: I1203 13:58:29.063361 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.064224 master-0 kubenswrapper[8988]: I1203 13:58:29.064173 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.065048 master-0 kubenswrapper[8988]: I1203 13:58:29.065007 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.065981 master-0 kubenswrapper[8988]: I1203 13:58:29.065941 8988 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"340d747ec16780778d80dba371a56e575bd9f6634b60bc266323b1291eb8cdba"} Dec 03 13:58:29.066421 master-0 kubenswrapper[8988]: I1203 13:58:29.066403 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"2d9fa9ab9f7e699411978500abac33a5ab419e6ce3c4e1ef13a7973cd07019af"} Dec 03 13:58:29.066575 master-0 kubenswrapper[8988]: I1203 13:58:29.066079 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.067799 master-0 kubenswrapper[8988]: I1203 13:58:29.067711 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.068172 master-0 kubenswrapper[8988]: I1203 13:58:29.068141 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"5e1a3335e1e7a01c650176f41fdc79b673ea09bcb04bb8a4c229686c62ac84e7"} Dec 03 13:58:29.068939 master-0 kubenswrapper[8988]: I1203 13:58:29.068862 8988 status_manager.go:851] 
"Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.069961 master-0 kubenswrapper[8988]: I1203 13:58:29.069900 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.070560 master-0 kubenswrapper[8988]: I1203 13:58:29.070518 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" event={"ID":"ecc68b17-9112-471d-89f9-15bf30dfa004","Type":"ContainerStarted","Data":"53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82"} Dec 03 13:58:29.070835 master-0 kubenswrapper[8988]: I1203 13:58:29.070775 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.071915 master-0 kubenswrapper[8988]: I1203 13:58:29.071869 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.072803 master-0 kubenswrapper[8988]: I1203 13:58:29.072753 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.072890 master-0 kubenswrapper[8988]: I1203 13:58:29.072850 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/0.log" Dec 03 13:58:29.072972 master-0 kubenswrapper[8988]: I1203 13:58:29.072930 8988 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="ecdb30fdbb4d4e7e6a5ab2a8c0c78dc966b6766d4fc8dacd3b90e5acf0728097" exitCode=1 Dec 03 13:58:29.073104 master-0 kubenswrapper[8988]: I1203 13:58:29.073018 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"ecdb30fdbb4d4e7e6a5ab2a8c0c78dc966b6766d4fc8dacd3b90e5acf0728097"} Dec 03 13:58:29.073721 master-0 kubenswrapper[8988]: I1203 13:58:29.073664 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.074585 master-0 kubenswrapper[8988]: I1203 13:58:29.074553 8988 status_manager.go:851] "Failed to get status for pod" 
podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.075514 master-0 kubenswrapper[8988]: I1203 13:58:29.075481 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.076234 master-0 kubenswrapper[8988]: I1203 13:58:29.076201 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.077155 master-0 kubenswrapper[8988]: I1203 13:58:29.077126 8988 generic.go:334] "Generic (PLEG): container finished" podID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" containerID="efe5b98b8193b6c315bd2fdafc1dfa799f114179992474177c6e7d697c70abb2" exitCode=0 Dec 03 13:58:29.077236 master-0 kubenswrapper[8988]: I1203 13:58:29.077147 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.077297 master-0 kubenswrapper[8988]: I1203 13:58:29.077241 8988 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"50f28c77-b15c-4b86-93c8-221c0cc82bb2","Type":"ContainerDied","Data":"efe5b98b8193b6c315bd2fdafc1dfa799f114179992474177c6e7d697c70abb2"} Dec 03 13:58:29.077983 master-0 kubenswrapper[8988]: I1203 13:58:29.077944 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.078689 master-0 kubenswrapper[8988]: I1203 13:58:29.078655 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.079109 master-0 kubenswrapper[8988]: I1203 13:58:29.079056 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-582c5" event={"ID":"1efcc24c-87bf-48cd-83b5-196c661a2517","Type":"ContainerStarted","Data":"baf8480d9e2390e6727c0d4fc8ed3cdbe4111310f815a1aee6d6f586fad1452c"} Dec 03 13:58:29.079440 master-0 kubenswrapper[8988]: I1203 13:58:29.079407 8988 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.080250 master-0 kubenswrapper[8988]: I1203 13:58:29.080209 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"78dfd31a88f925b32bf9c0b8856a8693ab7bf23f18e8289b9863420889031b28"} Dec 03 13:58:29.080489 master-0 kubenswrapper[8988]: I1203 13:58:29.080453 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.081121 master-0 kubenswrapper[8988]: I1203 13:58:29.081081 8988 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.081779 master-0 kubenswrapper[8988]: I1203 13:58:29.081732 8988 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="e05f3c8b427af65164aa63c5861d13e4cd4cc04110fb6fdb74286266751163bc" exitCode=0 Dec 03 13:58:29.081860 master-0 kubenswrapper[8988]: I1203 13:58:29.081773 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"e05f3c8b427af65164aa63c5861d13e4cd4cc04110fb6fdb74286266751163bc"} Dec 03 13:58:29.081917 master-0 kubenswrapper[8988]: I1203 13:58:29.081852 8988 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.082883 master-0 kubenswrapper[8988]: I1203 13:58:29.082792 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.083803 master-0 kubenswrapper[8988]: I1203 13:58:29.083749 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.084556 master-0 kubenswrapper[8988]: I1203 13:58:29.084490 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.084850 master-0 kubenswrapper[8988]: I1203 13:58:29.084812 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"f1973b4466f42fb61df6cd77cbfef702ec93663db7196dad05b5c33c247207c2"} Dec 03 13:58:29.085685 master-0 kubenswrapper[8988]: I1203 13:58:29.085257 8988 status_manager.go:851] "Failed to get status 
for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.086081 master-0 kubenswrapper[8988]: I1203 13:58:29.086050 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.087049 master-0 kubenswrapper[8988]: I1203 13:58:29.087003 8988 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.088020 master-0 kubenswrapper[8988]: I1203 13:58:29.087981 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerStarted","Data":"8cd048874efe4b30f5f42dd06b85dd1c97db84e3f9ffabe72fd07644d0447417"} Dec 03 13:58:29.088092 master-0 kubenswrapper[8988]: I1203 13:58:29.088045 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.088626 master-0 kubenswrapper[8988]: I1203 13:58:29.088592 8988 
status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.090041 master-0 kubenswrapper[8988]: I1203 13:58:29.089976 8988 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.090940 master-0 kubenswrapper[8988]: I1203 13:58:29.090902 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.091074 master-0 kubenswrapper[8988]: I1203 13:58:29.091023 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerStarted","Data":"8ef0a7e56fbe9931d72ba7b8b024332339ea3e21624a3cc8144f776dec699c05"} Dec 03 13:58:29.107297 master-0 kubenswrapper[8988]: I1203 13:58:29.092582 8988 status_manager.go:851] "Failed to get status for pod" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" pod="openshift-kube-apiserver/installer-2-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.107297 master-0 kubenswrapper[8988]: I1203 13:58:29.093652 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.124301 master-0 kubenswrapper[8988]: I1203 13:58:29.116054 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.124301 master-0 kubenswrapper[8988]: I1203 13:58:29.117586 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.124301 master-0 kubenswrapper[8988]: I1203 13:58:29.118399 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"c881085069f63bacb400a6fed6cb8da6b8a20dafb74617a181cac7ed05a9f546"} Dec 03 13:58:29.124301 master-0 kubenswrapper[8988]: I1203 13:58:29.120415 8988 status_manager.go:851] "Failed to get status for 
pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.124301 master-0 kubenswrapper[8988]: I1203 13:58:29.121363 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.129301 master-0 kubenswrapper[8988]: I1203 13:58:29.125661 8988 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.129301 master-0 kubenswrapper[8988]: I1203 13:58:29.126167 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.138326 master-0 kubenswrapper[8988]: I1203 13:58:29.135680 8988 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.138326 master-0 kubenswrapper[8988]: I1203 13:58:29.137829 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"f1979e33e0a276b36af72ab9c543861a6a668fc1d0e667f02164e7d59d91af2f"} Dec 03 13:58:29.142297 master-0 kubenswrapper[8988]: I1203 13:58:29.140965 8988 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.146633 master-0 kubenswrapper[8988]: I1203 13:58:29.146544 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.157587 master-0 kubenswrapper[8988]: I1203 13:58:29.157480 8988 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.163034 master-0 kubenswrapper[8988]: I1203 13:58:29.162449 8988 status_manager.go:851] 
"Failed to get status for pod" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-5775bfbf6d-vtvbd\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.166058 master-0 kubenswrapper[8988]: I1203 13:58:29.166003 8988 status_manager.go:851] "Failed to get status for pod" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.172581 master-0 kubenswrapper[8988]: I1203 13:58:29.172507 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.177935 master-0 kubenswrapper[8988]: I1203 13:58:29.177825 8988 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1" exitCode=0 Dec 03 13:58:29.194365 master-0 kubenswrapper[8988]: I1203 13:58:29.188597 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.194365 master-0 
kubenswrapper[8988]: I1203 13:58:29.189467 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.194365 master-0 kubenswrapper[8988]: I1203 13:58:29.189888 8988 status_manager.go:851] "Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.194365 master-0 kubenswrapper[8988]: I1203 13:58:29.190391 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.194365 master-0 kubenswrapper[8988]: I1203 13:58:29.190810 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.194365 master-0 kubenswrapper[8988]: I1203 13:58:29.194019 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.195204 master-0 kubenswrapper[8988]: I1203 13:58:29.195129 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.195966 master-0 kubenswrapper[8988]: I1203 13:58:29.195926 8988 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.196485 master-0 kubenswrapper[8988]: I1203 13:58:29.196440 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.197084 master-0 kubenswrapper[8988]: I1203 13:58:29.197043 8988 status_manager.go:851] "Failed to get status for pod" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-66f4cc99d4-x278n\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.197847 master-0 kubenswrapper[8988]: I1203 
13:58:29.197778 8988 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.199150 master-0 kubenswrapper[8988]: I1203 13:58:29.199053 8988 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.199902 master-0 kubenswrapper[8988]: I1203 13:58:29.199848 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.201317 master-0 kubenswrapper[8988]: I1203 13:58:29.201006 8988 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.202553 master-0 kubenswrapper[8988]: I1203 13:58:29.202510 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" 
event={"ID":"f5f23b6d-8303-46d8-892e-8e2c01b567b5","Type":"ContainerStarted","Data":"80e95cd74710420c097c7cf837380f44e3fef76745b76b26d24bb3a848d0ba8d"} Dec 03 13:58:29.203190 master-0 kubenswrapper[8988]: I1203 13:58:29.203123 8988 status_manager.go:851] "Failed to get status for pod" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-5775bfbf6d-vtvbd\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.203947 master-0 kubenswrapper[8988]: I1203 13:58:29.203885 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.204568 master-0 kubenswrapper[8988]: I1203 13:58:29.204534 8988 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtm6s" event={"ID":"486d4964-18cc-4adc-b82d-b09627cadda4","Type":"ContainerStarted","Data":"9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3"} Dec 03 13:58:29.204733 master-0 kubenswrapper[8988]: I1203 13:58:29.204697 8988 status_manager.go:851] "Failed to get status for pod" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.205508 master-0 kubenswrapper[8988]: I1203 13:58:29.205459 8988 status_manager.go:851] "Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" 
pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.206008 master-0 kubenswrapper[8988]: I1203 13:58:29.205960 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.206672 master-0 kubenswrapper[8988]: I1203 13:58:29.206614 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.207199 master-0 kubenswrapper[8988]: I1203 13:58:29.207163 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.208056 master-0 kubenswrapper[8988]: I1203 13:58:29.208013 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 
192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.208696 master-0 kubenswrapper[8988]: I1203 13:58:29.208651 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.209530 master-0 kubenswrapper[8988]: I1203 13:58:29.209485 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.210102 master-0 kubenswrapper[8988]: I1203 13:58:29.210059 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.212761 master-0 kubenswrapper[8988]: I1203 13:58:29.212719 8988 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-78ddcf56f9-8l84w_63aae3b9-9a72-497e-af01-5d8b8d0ac876/multus-admission-controller/0.log" Dec 03 13:58:29.212897 master-0 kubenswrapper[8988]: I1203 13:58:29.212831 8988 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:58:29.212973 master-0 kubenswrapper[8988]: I1203 13:58:29.212719 8988 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.214177 master-0 kubenswrapper[8988]: I1203 13:58:29.214141 8988 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.214947 master-0 kubenswrapper[8988]: I1203 13:58:29.214914 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.215732 master-0 kubenswrapper[8988]: I1203 13:58:29.215699 8988 status_manager.go:851] "Failed to get status for pod" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" pod="openshift-kube-scheduler/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.217823 master-0 kubenswrapper[8988]: I1203 13:58:29.217782 8988 status_manager.go:851] "Failed to get status for pod" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-66f4cc99d4-x278n\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.218502 master-0 kubenswrapper[8988]: I1203 13:58:29.218474 8988 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.219119 master-0 kubenswrapper[8988]: I1203 13:58:29.219091 8988 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.220421 master-0 kubenswrapper[8988]: I1203 13:58:29.220395 8988 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.220943 master-0 kubenswrapper[8988]: I1203 13:58:29.220826 8988 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.221814 master-0 kubenswrapper[8988]: I1203 13:58:29.221790 8988 status_manager.go:851] "Failed to get status for pod" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-78ddcf56f9-8l84w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.222348 master-0 kubenswrapper[8988]: I1203 13:58:29.222325 8988 status_manager.go:851] "Failed to get status for pod" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-5775bfbf6d-vtvbd\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.222751 master-0 kubenswrapper[8988]: I1203 13:58:29.222730 8988 status_manager.go:851] "Failed to get status for pod" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-68c95b6cf5-fmdmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.223235 master-0 kubenswrapper[8988]: I1203 13:58:29.223199 8988 status_manager.go:851] "Failed to get status for pod" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.223735 master-0 
kubenswrapper[8988]: I1203 13:58:29.223714 8988 status_manager.go:851] "Failed to get status for pod" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7d8fb964c9-v2h98\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.224195 master-0 kubenswrapper[8988]: I1203 13:58:29.224172 8988 status_manager.go:851] "Failed to get status for pod" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" pod="openshift-marketplace/redhat-operators-6rjqz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6rjqz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.224612 master-0 kubenswrapper[8988]: I1203 13:58:29.224590 8988 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.255831 master-0 kubenswrapper[8988]: I1203 13:58:29.255744 8988 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.256781 master-0 kubenswrapper[8988]: I1203 13:58:29.256716 8988 status_manager.go:851] "Failed to get status for pod" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6fcd4b8856-ztns6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.276426 master-0 kubenswrapper[8988]: I1203 13:58:29.276352 8988 status_manager.go:851] "Failed to get status for pod" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" pod="openshift-marketplace/community-operators-582c5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-582c5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.292824 master-0 kubenswrapper[8988]: I1203 13:58:29.289112 8988 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.292824 master-0 kubenswrapper[8988]: I1203 13:58:29.289728 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbcrq\" (UniqueName: \"kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq\") pod \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " Dec 03 13:58:29.292824 master-0 kubenswrapper[8988]: I1203 13:58:29.289878 8988 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs\") pod \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\" (UID: \"63aae3b9-9a72-497e-af01-5d8b8d0ac876\") " Dec 03 13:58:29.297825 master-0 kubenswrapper[8988]: I1203 13:58:29.297721 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/63aae3b9-9a72-497e-af01-5d8b8d0ac876-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "63aae3b9-9a72-497e-af01-5d8b8d0ac876" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:58:29.298234 master-0 kubenswrapper[8988]: I1203 13:58:29.298113 8988 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63aae3b9-9a72-497e-af01-5d8b8d0ac876-kube-api-access-zbcrq" (OuterVolumeSpecName: "kube-api-access-zbcrq") pod "63aae3b9-9a72-497e-af01-5d8b8d0ac876" (UID: "63aae3b9-9a72-497e-af01-5d8b8d0ac876"). InnerVolumeSpecName "kube-api-access-zbcrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:58:29.310000 master-0 kubenswrapper[8988]: I1203 13:58:29.309941 8988 status_manager.go:851] "Failed to get status for pod" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" pod="openshift-marketplace/redhat-marketplace-mtm6s" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mtm6s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.329696 master-0 kubenswrapper[8988]: I1203 13:58:29.329576 8988 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 13:58:29.339332 master-0 systemd[1]: Stopping Kubernetes Kubelet... Dec 03 13:58:29.380341 master-0 systemd[1]: kubelet.service: Deactivated successfully. Dec 03 13:58:29.380737 master-0 systemd[1]: Stopped Kubernetes Kubelet. Dec 03 13:58:29.391588 master-0 systemd[1]: kubelet.service: Consumed 30.825s CPU time. 
Dec 03 13:58:29.488894 master-0 systemd[1]: Starting Kubernetes Kubelet... Dec 03 13:58:29.641866 master-0 kubenswrapper[16176]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 13:58:29.641866 master-0 kubenswrapper[16176]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 13:58:29.641866 master-0 kubenswrapper[16176]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 13:58:29.641866 master-0 kubenswrapper[16176]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 13:58:29.641866 master-0 kubenswrapper[16176]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 13:58:29.642782 master-0 kubenswrapper[16176]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 13:58:29.642782 master-0 kubenswrapper[16176]: I1203 13:58:29.642037 16176 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644872 16176 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644899 16176 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644907 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644912 16176 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644918 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 13:58:29.644917 master-0 kubenswrapper[16176]: W1203 13:58:29.644925 16176 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644933 16176 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644939 16176 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644945 16176 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644950 16176 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644955 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644959 16176 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644963 16176 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644968 16176 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644973 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644979 16176 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644985 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644991 16176 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.644996 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645002 16176 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645006 16176 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645011 16176 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645015 16176 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645021 16176 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645025 16176 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 13:58:29.645210 master-0 kubenswrapper[16176]: W1203 13:58:29.645030 16176 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645034 16176 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645038 16176 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645043 16176 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 13:58:29.645982 
master-0 kubenswrapper[16176]: W1203 13:58:29.645048 16176 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645052 16176 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645057 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645061 16176 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645067 16176 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645073 16176 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645079 16176 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645083 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645088 16176 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645092 16176 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645110 16176 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645114 16176 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645119 16176 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645124 16176 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645128 16176 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645133 16176 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:58:29.645982 master-0 kubenswrapper[16176]: W1203 13:58:29.645137 16176 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645142 16176 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645146 16176 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645151 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645155 16176 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645159 16176 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645183 16176 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645188 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645193 16176 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645197 16176 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645202 16176 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645208 16176 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645215 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645220 16176 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645226 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645231 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645236 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645241 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645246 16176 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645252 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:58:29.646775 master-0 kubenswrapper[16176]: W1203 13:58:29.645258 16176 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645280 16176 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645285 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645290 16176 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645294 16176 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645299 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: W1203 13:58:29.645307 16176 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645523 16176 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645539 16176 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645551 16176 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645559 16176 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645566 16176 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645571 16176 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645580 16176 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645587 16176 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645592 16176 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645597 16176 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645603 16176 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645609 16176 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645614 16176 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645620 16176 flags.go:64] FLAG: --cgroup-root=""
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645626 16176 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645631 16176 flags.go:64] FLAG: --client-ca-file=""
Dec 03 13:58:29.647486 master-0 kubenswrapper[16176]: I1203 13:58:29.645637 16176 flags.go:64] FLAG: --cloud-config=""
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645642 16176 flags.go:64] FLAG: --cloud-provider=""
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645647 16176 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645655 16176 flags.go:64] FLAG: --cluster-domain=""
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645660 16176 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645665 16176 flags.go:64] FLAG: --config-dir=""
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645670 16176 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645676 16176 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645683 16176 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645693 16176 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645699 16176 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645704 16176 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645710 16176 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645715 16176 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645720 16176 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645729 16176 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645734 16176 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645741 16176 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645746 16176 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645751 16176 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645757 16176 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645762 16176 flags.go:64] FLAG: --enable-server="true"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645767 16176 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645774 16176 flags.go:64] FLAG: --event-burst="100"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645779 16176 flags.go:64] FLAG: --event-qps="50"
Dec 03 13:58:29.648285 master-0 kubenswrapper[16176]: I1203 13:58:29.645785 16176 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645790 16176 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645795 16176 flags.go:64] FLAG: --eviction-hard=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645802 16176 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645807 16176 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645812 16176 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645817 16176 flags.go:64] FLAG: --eviction-soft=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645823 16176 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645827 16176 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645833 16176 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645838 16176 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645843 16176 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645848 16176 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645853 16176 flags.go:64] FLAG: --feature-gates=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645860 16176 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645865 16176 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645874 16176 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645880 16176 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645885 16176 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645891 16176 flags.go:64] FLAG: --help="false"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645896 16176 flags.go:64] FLAG: --hostname-override=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645900 16176 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645909 16176 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645916 16176 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645921 16176 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 13:58:29.650465 master-0 kubenswrapper[16176]: I1203 13:58:29.645927 16176 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645932 16176 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645938 16176 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645943 16176 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645947 16176 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645953 16176 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645958 16176 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645963 16176 flags.go:64] FLAG: --kube-reserved=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645969 16176 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645974 16176 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645979 16176 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645985 16176 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645991 16176 flags.go:64] FLAG: --lock-file=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.645997 16176 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646003 16176 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646008 16176 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646017 16176 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646023 16176 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646028 16176 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646033 16176 flags.go:64] FLAG: --logging-format="text"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646038 16176 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646044 16176 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646050 16176 flags.go:64] FLAG: --manifest-url=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646057 16176 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646064 16176 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 13:58:29.651342 master-0 kubenswrapper[16176]: I1203 13:58:29.646069 16176 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646076 16176 flags.go:64] FLAG: --max-pods="110"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646081 16176 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646086 16176 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646094 16176 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646100 16176 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646106 16176 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646111 16176 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646116 16176 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646135 16176 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646141 16176 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646146 16176 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646152 16176 flags.go:64] FLAG: --pod-cidr=""
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646157 16176 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646166 16176 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646171 16176 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646177 16176 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646183 16176 flags.go:64] FLAG: --port="10250"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646188 16176 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646194 16176 flags.go:64] FLAG: --provider-id=""
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646199 16176 flags.go:64] FLAG: --qos-reserved=""
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646205 16176 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646210 16176 flags.go:64] FLAG: --register-node="true"
Dec 03 13:58:29.652157 master-0 kubenswrapper[16176]: I1203 13:58:29.646215 16176 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646221 16176 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646231 16176 flags.go:64] FLAG: --registry-burst="10"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646236 16176 flags.go:64] FLAG: --registry-qps="5"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646242 16176 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646247 16176 flags.go:64] FLAG: --reserved-memory=""
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646254 16176 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646283 16176 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646289 16176 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646294 16176 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646299 16176 flags.go:64] FLAG: --runonce="false"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646305 16176 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646310 16176 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646316 16176 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646331 16176 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646342 16176 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646347 16176 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646357 16176 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646362 16176 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646368 16176 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646374 16176 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646379 16176 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646390 16176 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646396 16176 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646401 16176 flags.go:64] FLAG: --system-cgroups=""
Dec 03 13:58:29.653053 master-0 kubenswrapper[16176]: I1203 13:58:29.646406 16176 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646414 16176 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646420 16176 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646425 16176 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646431 16176 flags.go:64] FLAG: --tls-min-version=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646436 16176 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646442 16176 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646447 16176 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646509 16176 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646516 16176 flags.go:64] FLAG: --v="2"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646525 16176 flags.go:64] FLAG: --version="false"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646533 16176 flags.go:64] FLAG: --vmodule=""
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646539 16176 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: I1203 13:58:29.646545 16176 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646673 16176 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646680 16176 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646685 16176 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646689 16176 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646694 16176 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646698 16176 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646703 16176 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646740 16176 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646814 16176 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:58:29.654299 master-0 kubenswrapper[16176]: W1203 13:58:29.646820 16176 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646825 16176 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646829 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646851 16176 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646856 16176 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646861 16176 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646868 16176 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646874 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646879 16176 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646883 16176 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646889 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646893 16176 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646898 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646902 16176 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646906 16176 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646911 16176 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646915 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646943 16176 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646948 16176 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 13:58:29.655403 master-0 kubenswrapper[16176]: W1203 13:58:29.646953 16176 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646957 16176 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646961 16176 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646965 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646973 16176 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646977 16176 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646981 16176 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646986 16176 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646990 16176 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646994 16176 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.646999 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647004 16176 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647008 16176 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647012 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647017 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647021 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647025 16176 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647030 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647034 16176 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647038 16176 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 13:58:29.656064 master-0 kubenswrapper[16176]: W1203 13:58:29.647044 16176 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647049 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647054 16176 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647060 16176 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647064 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647079 16176 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647084 16176 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647090 16176 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647095 16176 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647099 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647104 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647109 16176 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647113 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647119 16176 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647124 16176 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647128 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647134 16176 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647138 16176 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647143 16176 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 13:58:29.656787 master-0 kubenswrapper[16176]: W1203 13:58:29.647148 16176 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.647152 16176 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.647156 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.647163 16176 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.647167 16176 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: I1203 13:58:29.647194 16176 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: I1203 13:58:29.652300 16176 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: I1203 13:58:29.652368 16176 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652852 16176 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652896 16176 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652902 16176 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652915 16176 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652922 16176 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652927 16176 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652933 16176 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 13:58:29.657555 master-0 kubenswrapper[16176]: W1203 13:58:29.652937 16176 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652942 16176 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652946 16176 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03
13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652951 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652956 16176 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652961 16176 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652970 16176 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652979 16176 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652988 16176 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652994 16176 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.652998 16176 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653003 16176 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653009 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653014 16176 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653018 16176 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653023 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 
13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653029 16176 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653036 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653042 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 13:58:29.658186 master-0 kubenswrapper[16176]: W1203 13:58:29.653048 16176 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653059 16176 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653067 16176 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653072 16176 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653077 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653082 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653088 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653095 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653100 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653106 16176 feature_gate.go:330] 
unrecognized feature gate: PersistentIPsForVirtualization Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653114 16176 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653119 16176 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653123 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653128 16176 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653136 16176 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653141 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653145 16176 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653151 16176 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653156 16176 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653161 16176 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 13:58:29.658903 master-0 kubenswrapper[16176]: W1203 13:58:29.653166 16176 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653170 16176 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653175 16176 feature_gate.go:330] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653180 16176 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653185 16176 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653191 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653195 16176 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653235 16176 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653242 16176 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653247 16176 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653251 16176 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653256 16176 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653300 16176 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653306 16176 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653311 16176 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653316 16176 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653321 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653326 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653337 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653346 16176 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 13:58:29.659700 master-0 kubenswrapper[16176]: W1203 13:58:29.653352 16176 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653357 16176 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653364 16176 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653371 16176 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653377 16176 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653382 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: I1203 13:58:29.653394 16176 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653684 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653693 16176 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653698 16176 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653703 16176 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653708 16176 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653713 16176 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653718 16176 feature_gate.go:330] unrecognized 
feature gate: ManagedBootImagesAWS Dec 03 13:58:29.660388 master-0 kubenswrapper[16176]: W1203 13:58:29.653724 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653729 16176 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653793 16176 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653800 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653805 16176 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653809 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653813 16176 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653817 16176 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653821 16176 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653825 16176 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653830 16176 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653834 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653838 16176 feature_gate.go:330] unrecognized 
feature gate: MinimumKubeletVersion Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653844 16176 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653848 16176 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653852 16176 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653857 16176 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653861 16176 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653865 16176 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653869 16176 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 13:58:29.660868 master-0 kubenswrapper[16176]: W1203 13:58:29.653873 16176 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653877 16176 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653881 16176 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653886 16176 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653891 16176 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653897 16176 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653902 16176 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653906 16176 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653910 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653915 16176 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653920 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653923 16176 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653928 16176 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653932 16176 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653940 16176 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653945 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653950 16176 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653957 16176 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 13:58:29.661600 master-0 kubenswrapper[16176]: W1203 13:58:29.653962 16176 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653966 16176 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653970 16176 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653974 16176 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653978 16176 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653982 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653986 16176 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653991 16176 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653995 16176 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.653999 16176 feature_gate.go:330] 
unrecognized feature gate: OVNObservability Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654004 16176 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654008 16176 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654014 16176 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654018 16176 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654022 16176 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654026 16176 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654030 16176 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654034 16176 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654038 16176 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654043 16176 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 13:58:29.662148 master-0 kubenswrapper[16176]: W1203 13:58:29.654047 16176 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654051 16176 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654055 16176 feature_gate.go:330] unrecognized feature gate: Example Dec 03 
13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654059 16176 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654063 16176 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654069 16176 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: W1203 13:58:29.654074 16176 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.654081 16176 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.655690 16176 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.657735 16176 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.657905 16176 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.658216 16176 server.go:997] "Starting client certificate rotation"
Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.658248 16176 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 13:58:29.662825 master-0 kubenswrapper[16176]: I1203 13:58:29.658405 16176 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 08:21:10.21275872 +0000 UTC
Dec 03 13:58:29.663424 master-0 kubenswrapper[16176]: I1203 13:58:29.658834 16176 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h22m40.553948114s for next certificate rotation
Dec 03 13:58:29.663424 master-0 kubenswrapper[16176]: I1203 13:58:29.659380 16176 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 13:58:29.663424 master-0 kubenswrapper[16176]: I1203 13:58:29.660973 16176 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 13:58:29.664834 master-0 kubenswrapper[16176]: I1203 13:58:29.664793 16176 log.go:25] "Validated CRI v1 runtime API"
Dec 03 13:58:29.671640 master-0 kubenswrapper[16176]: I1203 13:58:29.671578 16176 log.go:25] "Validated CRI v1 image API"
Dec 03 13:58:29.673052 master-0 kubenswrapper[16176]: I1203 13:58:29.673002 16176 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 03 13:58:29.682234 master-0 kubenswrapper[16176]: I1203 13:58:29.682160 16176 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3]
Dec 03 13:58:29.682960 master-0 kubenswrapper[16176]: I1203 13:58:29.682213 16176 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06a6bfff8d933d9670b8e8e8de6cfda51fcb359ae53ec1c55a93a9738f4fc201/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06a6bfff8d933d9670b8e8e8de6cfda51fcb359ae53ec1c55a93a9738f4fc201/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23/userdata/shm major:0 minor:837 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm major:0 minor:332 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2d9b8c691d5f3ee7b94a063c9932a9e9584dbd2cc766bb12c9c9139903e78355/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2d9b8c691d5f3ee7b94a063c9932a9e9584dbd2cc766bb12c9c9139903e78355/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/317e7dd62fa100db1d45ff57aba484e787374c6332b21d016a43057d248fc561/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/317e7dd62fa100db1d45ff57aba484e787374c6332b21d016a43057d248fc561/userdata/shm major:0 minor:991 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db/userdata/shm major:0 minor:432 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/44ddc337512cf47184ee9f63cffcf4b3f72f69c2c567abe7ddd38b25975bdf7c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/44ddc337512cf47184ee9f63cffcf4b3f72f69c2c567abe7ddd38b25975bdf7c/userdata/shm major:0 minor:517 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4568b0000197ea509dbc549f285c717622711f0c697e5e0a5502e9e4faaedd8e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4568b0000197ea509dbc549f285c717622711f0c697e5e0a5502e9e4faaedd8e/userdata/shm major:0 minor:762 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm major:0 minor:122 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm major:0 minor:327 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm major:0 minor:320 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97/userdata/shm major:0 minor:979 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889/userdata/shm major:0 minor:524 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82/userdata/shm major:0 minor:890 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/588f3f48138c2b3392a4eae817dfb25b4a6dd6a9f3ecf65d5033e45b842a15ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/588f3f48138c2b3392a4eae817dfb25b4a6dd6a9f3ecf65d5033e45b842a15ed/userdata/shm major:0 minor:561 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2/userdata/shm major:0 minor:986 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/608e5faf1d1f7ffd467c7714def83c802d4d5d7a97b5dd1c6daac1ec34f49d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/608e5faf1d1f7ffd467c7714def83c802d4d5d7a97b5dd1c6daac1ec34f49d3a/userdata/shm major:0 minor:528 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/636d93d2bc5d6274a68744e7bb8286da893d7e599b6de981210f2789cc0fd2da/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/636d93d2bc5d6274a68744e7bb8286da893d7e599b6de981210f2789cc0fd2da/userdata/shm major:0 minor:741 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e/userdata/shm major:0 minor:1032 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d/userdata/shm major:0 minor:381 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78dfd31a88f925b32bf9c0b8856a8693ab7bf23f18e8289b9863420889031b28/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78dfd31a88f925b32bf9c0b8856a8693ab7bf23f18e8289b9863420889031b28/userdata/shm major:0 minor:900 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80e95cd74710420c097c7cf837380f44e3fef76745b76b26d24bb3a848d0ba8d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80e95cd74710420c097c7cf837380f44e3fef76745b76b26d24bb3a848d0ba8d/userdata/shm major:0 minor:836 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8154ea1badaee93911650d5b6c9a0d50ee5f865cc92efee68e3e567a26fac336/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8154ea1badaee93911650d5b6c9a0d50ee5f865cc92efee68e3e567a26fac336/userdata/shm major:0 minor:1029 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm major:0 minor:160 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm major:0 minor:336 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd/userdata/shm major:0 minor:500 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d/userdata/shm major:0 minor:811 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2/userdata/shm major:0 minor:549 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3/userdata/shm major:0 minor:813 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm major:0 minor:333 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/939b2c92bcdf7cd1a7639905546ba592e8fa9fac9978494aea2a13c1b29704e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/939b2c92bcdf7cd1a7639905546ba592e8fa9fac9978494aea2a13c1b29704e8/userdata/shm major:0 minor:520 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm major:0 minor:338 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm major:0 minor:161 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99aab5d6addd41c622154cc6f270a6df7b17355eeaee15a1257331779d37b167/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99aab5d6addd41c622154cc6f270a6df7b17355eeaee15a1257331779d37b167/userdata/shm major:0 minor:889 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409/userdata/shm major:0 minor:914 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a4482da3b6269c37fd64ed4a723b3d1c0f7f294b123b00a40d321fec5fbfbd20/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a4482da3b6269c37fd64ed4a723b3d1c0f7f294b123b00a40d321fec5fbfbd20/userdata/shm major:0 minor:631 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b23778ca4e9ae4dbb3de59134916161ec83a634b903bdd6f9ff3c7980d2471f9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b23778ca4e9ae4dbb3de59134916161ec83a634b903bdd6f9ff3c7980d2471f9/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83/userdata/shm major:0 minor:995 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b/userdata/shm major:0 minor:997 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2/userdata/shm major:0 minor:589 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/baf8480d9e2390e6727c0d4fc8ed3cdbe4111310f815a1aee6d6f586fad1452c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/baf8480d9e2390e6727c0d4fc8ed3cdbe4111310f815a1aee6d6f586fad1452c/userdata/shm major:0 minor:812 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c0c28bf839b2e5e1bceddd001eae58acc3775691713261478c7742c6a0302aba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c0c28bf839b2e5e1bceddd001eae58acc3775691713261478c7742c6a0302aba/userdata/shm major:0 minor:1030 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393/userdata/shm major:0 minor:1025 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968/userdata/shm major:0 minor:515 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm major:0 minor:149 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f/userdata/shm major:0 minor:1026 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e/userdata/shm major:0 minor:400 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm major:0 minor:189 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e3ebbbd0ee7ca51929de3beceebd50f7b813cf02ddcf6e89d22ba8b987cb3d6e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e3ebbbd0ee7ca51929de3beceebd50f7b813cf02ddcf6e89d22ba8b987cb3d6e/userdata/shm major:0 minor:458 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555/userdata/shm major:0 minor:814 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e90ae89b899f7c4adaa8b2c1c88e7171c1cb37b6c4cab4e7e1756faa4c54abf5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e90ae89b899f7c4adaa8b2c1c88e7171c1cb37b6c4cab4e7e1756faa4c54abf5/userdata/shm major:0 minor:985 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm major:0 minor:345 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1e6e9b5cc1123cc229e5b5c55833cf8c55b534df02d94f2822bf88d34528957/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1e6e9b5cc1123cc229e5b5c55833cf8c55b534df02d94f2822bf88d34528957/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d/userdata/shm major:0 minor:513 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm major:0 minor:330 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352/userdata/shm major:0 minor:521 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ff1b2a1ff9154238692ebb6e0ae688f400ae8b743c546d838dde5d5bc888fe8a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff1b2a1ff9154238692ebb6e0ae688f400ae8b743c546d838dde5d5bc888fe8a/userdata/shm major:0 minor:896 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03494fce-881e-4eb6-bc3d-570f1d8e7c52/volumes/kubernetes.io~projected/kube-api-access-6k2bw:{mountpoint:/var/lib/kubelet/pods/03494fce-881e-4eb6-bc3d-570f1d8e7c52/volumes/kubernetes.io~projected/kube-api-access-6k2bw major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:307 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:464 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:463 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv:{mountpoint:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert major:0 minor:294 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access major:0 minor:308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~projected/kube-api-access-pj4f8:{mountpoint:/var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~projected/kube-api-access-pj4f8 major:0 minor:809 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~secret/serving-cert major:0 minor:765 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:710 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp major:0 minor:709 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh major:0 minor:711 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27:{mountpoint:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27 major:0 minor:300 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1efcc24c-87bf-48cd-83b5-196c661a2517/volumes/kubernetes.io~projected/kube-api-access-whkbl:{mountpoint:/var/lib/kubelet/pods/1efcc24c-87bf-48cd-83b5-196c661a2517/volumes/kubernetes.io~projected/kube-api-access-whkbl major:0 minor:807 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf:{mountpoint:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf major:0 minor:399 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~secret/webhook-certs major:0 minor:398 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~projected/kube-api-access-m789m:{mountpoint:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~projected/kube-api-access-m789m major:0 minor:835 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/encryption-config major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/etcd-client major:0 minor:833 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/serving-cert major:0 minor:832 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~projected/kube-api-access-jzlgx:{mountpoint:/var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~projected/kube-api-access-jzlgx major:0 minor:431 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~secret/signing-key major:0 minor:430 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl:{mountpoint:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl major:0 minor:739 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj major:0 minor:623 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~secret/metrics-tls major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/486d4964-18cc-4adc-b82d-b09627cadda4/volumes/kubernetes.io~projected/kube-api-access-m4pd4:{mountpoint:/var/lib/kubelet/pods/486d4964-18cc-4adc-b82d-b09627cadda4/volumes/kubernetes.io~projected/kube-api-access-m4pd4 major:0 minor:808 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j major:0 minor:1020 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~secret/proxy-tls major:0 minor:1000 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50f28c77-b15c-4b86-93c8-221c0cc82bb2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/50f28c77-b15c-4b86-93c8-221c0cc82bb2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z major:0 minor:316 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client major:0 minor:292 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert major:0 minor:299 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87:{mountpoint:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87 major:0 minor:301 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access major:0 minor:314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:296 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dcaccc5-46b1-4a38-b3af-6839dec529d3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5dcaccc5-46b1-4a38-b3af-6839dec529d3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1121 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/volumes/kubernetes.io~projected/kube-api-access-wqkdr:{mountpoint:/var/lib/kubelet/pods/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/volumes/kubernetes.io~projected/kube-api-access-wqkdr major:0 minor:499 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~projected/kube-api-access-8wh8g:{mountpoint:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~projected/kube-api-access-8wh8g major:0 minor:1018 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cert major:0 minor:975 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:916 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:156 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/ca-certs major:0 minor:610 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/kube-api-access-t8knq:{mountpoint:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/kube-api-access-t8knq major:0 minor:612 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:547 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~projected/kube-api-access-2q8g8:{mountpoint:/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~projected/kube-api-access-2q8g8 major:0 minor:911 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:640 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~projected/kube-api-access-nc9nj:{mountpoint:/var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~projected/kube-api-access-nc9nj major:0 minor:908 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:644 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d38d102-4efe-4ed3-ae23-b1e295cdaccd/volumes/kubernetes.io~projected/kube-api-access-v429m:{mountpoint:/var/lib/kubelet/pods/6d38d102-4efe-4ed3-ae23-b1e295cdaccd/volumes/kubernetes.io~projected/kube-api-access-v429m major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:344 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6f723d97-5c65-4ae7-9085-26db8b4f2f52/volumes/kubernetes.io~projected/kube-api-access-wwv7s:{mountpoint:/var/lib/kubelet/pods/6f723d97-5c65-4ae7-9085-26db8b4f2f52/volumes/kubernetes.io~projected/kube-api-access-wwv7s major:0 minor:514 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~projected/kube-api-access-ltsnd:{mountpoint:/var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~projected/kube-api-access-ltsnd major:0 minor:1019 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~secret/cert major:0 minor:915 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:110 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 major:0 minor:689 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls major:0 minor:688 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8:{mountpoint:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8 major:0 minor:302 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:291 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/ca-certs major:0 minor:563 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/kube-api-access-bwck4:{mountpoint:/var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/kube-api-access-bwck4 major:0 minor:548 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~projected/kube-api-access-dwmrj:{mountpoint:/var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~projected/kube-api-access-dwmrj major:0 minor:972 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:639 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb:{mountpoint:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert major:0 minor:295 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd:{mountpoint:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd major:0 minor:306 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~secret/metrics-tls major:0 minor:461 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a192c38a-4bfa-40fe-9a2d-d48260cf6443/volumes/kubernetes.io~projected/kube-api-access-fn7fm:{mountpoint:/var/lib/kubelet/pods/a192c38a-4bfa-40fe-9a2d-d48260cf6443/volumes/kubernetes.io~projected/kube-api-access-fn7fm major:0 minor:1122 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~projected/kube-api-access-9cnd5:{mountpoint:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~projected/kube-api-access-9cnd5 major:0 minor:1022 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:999 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/srv-cert major:0 minor:1001 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~projected/kube-api-access-cbzpz:{mountpoint:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~projected/kube-api-access-cbzpz major:0 minor:617 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/encryption-config major:0 minor:615 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/etcd-client major:0 minor:614 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/serving-cert major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~projected/kube-api-access-5mk6r:{mountpoint:/var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~projected/kube-api-access-5mk6r major:0 minor:980 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:976 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq:{mountpoint:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq major:0 minor:310 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert major:0 minor:293 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access major:0 minor:304 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert major:0 minor:297 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~projected/kube-api-access-92p99:{mountpoint:/var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~projected/kube-api-access-92p99 major:0 minor:974 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:638 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~secret/metrics-certs major:0 minor:586 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~projected/kube-api-access-lfdn2:{mountpoint:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~projected/kube-api-access-lfdn2 major:0 minor:1021 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:1011 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/srv-cert major:0 minor:1007 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr major:0 minor:317 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~secret/metrics-tls major:0 minor:460 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bff18a80-0b0f-40ab-862e-e8b1ab32040a/volumes/kubernetes.io~projected/kube-api-access-zcqxx:{mountpoint:/var/lib/kubelet/pods/bff18a80-0b0f-40ab-862e-e8b1ab32040a/volumes/kubernetes.io~projected/kube-api-access-zcqxx major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8:{mountpoint:/var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8 major:0 minor:309 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~projected/kube-api-access-zhc87:{mountpoint:/var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~projected/kube-api-access-zhc87 major:0 minor:973 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~secret/serving-cert major:0 minor:646 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:157 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:111 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:121 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:94 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~projected/kube-api-access-rjbsl:{mountpoint:/var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~projected/kube-api-access-rjbsl major:0 minor:984 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:977 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:587 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:806 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert major:0 minor:805 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~projected/kube-api-access-jpttk:{mountpoint:/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~projected/kube-api-access-jpttk major:0 minor:613 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~secret/serving-cert major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~projected/kube-api-access-jn5h6:{mountpoint:/var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~projected/kube-api-access-jn5h6 major:0 minor:909 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:645 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd:{mountpoint:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd major:0 minor:326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert major:0 minor:323 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~projected/kube-api-access-8xrdq:{mountpoint:/var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~projected/kube-api-access-8xrdq major:0 minor:379 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~secret/serving-cert major:0 minor:93 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659:{mountpoint:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659 major:0 minor:328 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:588 fsType:tmpfs blockSize:0} overlay_0-1002:{mountpoint:/var/lib/containers/storage/overlay/fcf8681e7a33f41ae5ac6f46cede5f689cbd7f0164a6d3865cf80d6825751f8f/merged major:0 minor:1002 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/b28094ecde33e074ba42b216bc25eca0518ff8bb544951653a2f56ad9f53ff0f/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/85a56d13a4d52fe50820526ef49a45a3383aa283b9e090af86d3ddbd01a3422e/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1035:{mountpoint:/var/lib/containers/storage/overlay/4635d35ab7639c700c84d12b74b46f441568e3d3dfd0b15814abac8dfbc698d9/merged major:0 minor:1035 fsType:overlay blockSize:0} overlay_0-1037:{mountpoint:/var/lib/containers/storage/overlay/741a5191924b2fff8c802038ca259b14564c8e4d0f49bb19930484be1c7d2d5b/merged major:0 minor:1037 fsType:overlay blockSize:0} overlay_0-1039:{mountpoint:/var/lib/containers/storage/overlay/46f2941c74c8db4f4c3d67b034fa0d45d71e72d9402477d131125b33e20cefb7/merged major:0 minor:1039 fsType:overlay blockSize:0} overlay_0-1041:{mountpoint:/var/lib/containers/storage/overlay/60c5c11a7ef1765a991894b00e3cd0d0168eea5f9c5bdf5f03442d3c5974d552/merged major:0 minor:1041 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/e13fa3c0fe7a95653a3cf12adb650fdc12b2d85566cce0fdec695781f3317743/merged 
major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1056:{mountpoint:/var/lib/containers/storage/overlay/a8f3681e6dbec2a1051569bc2d8c5aba86d3790a6a94ca737ab2364766f2ecea/merged major:0 minor:1056 fsType:overlay blockSize:0} overlay_0-1058:{mountpoint:/var/lib/containers/storage/overlay/34bf1fb4429c645d8f425c5a7dbfaeadf8281f261341863c37e3fabb59f33dca/merged major:0 minor:1058 fsType:overlay blockSize:0} overlay_0-1067:{mountpoint:/var/lib/containers/storage/overlay/35dd7c023e977746ad05d97b8e2b389d23c83f29f8edb6175c12faaf8d0cce74/merged major:0 minor:1067 fsType:overlay blockSize:0} overlay_0-1069:{mountpoint:/var/lib/containers/storage/overlay/a236968548ecc750cdeec21afb2b9a62794778208f29819c00d749f628b78231/merged major:0 minor:1069 fsType:overlay blockSize:0} overlay_0-1071:{mountpoint:/var/lib/containers/storage/overlay/340f82623d5ea2a9bf23e7adb554257b3c327098c6c073e085f3636fbc57ed3d/merged major:0 minor:1071 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/67ba8656ebbe8062925d04996863a4ba55184f632ed6a8eecd93650ad71508a7/merged major:0 minor:1073 fsType:overlay blockSize:0} overlay_0-1075:{mountpoint:/var/lib/containers/storage/overlay/dede523f675413b327cc6e59dbe60da5785770101034476e1636e614b9f4dab8/merged major:0 minor:1075 fsType:overlay blockSize:0} overlay_0-1077:{mountpoint:/var/lib/containers/storage/overlay/9d13cbd5df6067c1c854c12444e648811082e9b43e39e4373ac74a9d75d0b148/merged major:0 minor:1077 fsType:overlay blockSize:0} overlay_0-1079:{mountpoint:/var/lib/containers/storage/overlay/93fd6d582a81c3286ed3a7777f539b49abd37683a74d27d1dd10004bff9415f4/merged major:0 minor:1079 fsType:overlay blockSize:0} overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/80f74c56541da7a312790a3264090770b0e957bb8a429e83ded7d3d56d0b04c0/merged major:0 minor:1102 fsType:overlay blockSize:0} 
overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/772886a2b90e6c283c799096a09c896a9f5fb8c0a172914caa231310dd2817cc/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-1106:{mountpoint:/var/lib/containers/storage/overlay/b564bad69022e77062a8e8cf08a60c1ad0fadcc21f471e349493255baa235d29/merged major:0 minor:1106 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/03d950f4ff72d8d2dd012b88e2ae954f879ae21a92e47e62b61ea9cd76d17dd4/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-1127:{mountpoint:/var/lib/containers/storage/overlay/f51ecc9855d74b98ac8aef0582be62d189e2e2d624a983c035fadcee93a39589/merged major:0 minor:1127 fsType:overlay blockSize:0} overlay_0-1129:{mountpoint:/var/lib/containers/storage/overlay/f7096ad9b931be6212d95a4fea38942f37f8bb7cf982ee56df70aaeb75d81782/merged major:0 minor:1129 fsType:overlay blockSize:0} overlay_0-1131:{mountpoint:/var/lib/containers/storage/overlay/646e39346b23bc4b40628227a36732936087cebece3ba34dbba302a67f0b2e38/merged major:0 minor:1131 fsType:overlay blockSize:0} overlay_0-1133:{mountpoint:/var/lib/containers/storage/overlay/f227a761935628b517a3b12e4d6d4bc8025cacc8612ef40f9bbb489e2bf6d41b/merged major:0 minor:1133 fsType:overlay blockSize:0} overlay_0-1135:{mountpoint:/var/lib/containers/storage/overlay/4f255a0751604dac4e34ef66c38c363c0a22e95870dff2ad97fa0afaf8de8be5/merged major:0 minor:1135 fsType:overlay blockSize:0} overlay_0-1137:{mountpoint:/var/lib/containers/storage/overlay/5018eedc4b0d425b8e246a0cfc5869b3b251f7d04144325ab823f8a650d4b3f8/merged major:0 minor:1137 fsType:overlay blockSize:0} overlay_0-1139:{mountpoint:/var/lib/containers/storage/overlay/a181ce395447c2bd663f2bade0536dbcdff3c8da314383a5e8e5a5f1c833c79f/merged major:0 minor:1139 fsType:overlay blockSize:0} overlay_0-1144:{mountpoint:/var/lib/containers/storage/overlay/f7afd3c8bc125023ac990ff71254a06042a1c0b3e1057f18e89592993a8086bb/merged major:0 minor:1144 fsType:overlay blockSize:0} 
overlay_0-1145:{mountpoint:/var/lib/containers/storage/overlay/e6703d6d15f4d519a4b4991909425a42ce7f67cc59433cc949e0424ac94296d9/merged major:0 minor:1145 fsType:overlay blockSize:0} overlay_0-1147:{mountpoint:/var/lib/containers/storage/overlay/b5c68225e323c02d5ebc7e28d1b63c8c5d4d2c682d2304e0d7b64ea26b84948c/merged major:0 minor:1147 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/db28b264c1eaa6cb56e366ea5602ec7dc64ce0b8e10f2e12e042fc9b3dea1083/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1161:{mountpoint:/var/lib/containers/storage/overlay/675d894771205b3d13998ce8fe8b72a515374d7d71e3de1686bfc3b1bf551709/merged major:0 minor:1161 fsType:overlay blockSize:0} overlay_0-1163:{mountpoint:/var/lib/containers/storage/overlay/af4eed13b7b38e335fd04256b451b772e4acc98768ab60c803b30a151a6cebc3/merged major:0 minor:1163 fsType:overlay blockSize:0} overlay_0-1169:{mountpoint:/var/lib/containers/storage/overlay/3e54d0ac41094476f87cfe8a05f283a14419aed3e0472b51e9ac5365018dfe1c/merged major:0 minor:1169 fsType:overlay blockSize:0} overlay_0-1173:{mountpoint:/var/lib/containers/storage/overlay/65dc8570fabee296a88ca2066a6c6979e95c01b895f0efc9a23c7d5a813e3fd3/merged major:0 minor:1173 fsType:overlay blockSize:0} overlay_0-1189:{mountpoint:/var/lib/containers/storage/overlay/6bddc8dea962889a5ddcffbdce0a2f6e75502a94d40ec40a9e463e41def805ff/merged major:0 minor:1189 fsType:overlay blockSize:0} overlay_0-1191:{mountpoint:/var/lib/containers/storage/overlay/30c879882d84093d2f18026c4060cb47c35913e09eab05385d2981cb359d611c/merged major:0 minor:1191 fsType:overlay blockSize:0} overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/417ececc38349cb63e0b761c3b9b3e83c9791d9a2c89d3ac6993a22635d5384f/merged major:0 minor:1193 fsType:overlay blockSize:0} overlay_0-1195:{mountpoint:/var/lib/containers/storage/overlay/5da5b3c612e803980db900cffefb8760789e7f0f20fdf7fa7e80b526d45b5029/merged major:0 minor:1195 fsType:overlay blockSize:0} 
overlay_0-1197:{mountpoint:/var/lib/containers/storage/overlay/75060b164268cd99675aed0aba8907daf0a2c4e96d6e29ce528b077bf8e32bca/merged major:0 minor:1197 fsType:overlay blockSize:0} overlay_0-1203:{mountpoint:/var/lib/containers/storage/overlay/10a0784fb69ae2bc0110873b764c12f69f2e554565617f8523720ed2a650c98a/merged major:0 minor:1203 fsType:overlay blockSize:0} overlay_0-1208:{mountpoint:/var/lib/containers/storage/overlay/ae8876c1af6cd37baff7f7a0b58095c3c2c532f9dd68d6b542dc8b20386983ef/merged major:0 minor:1208 fsType:overlay blockSize:0} overlay_0-1210:{mountpoint:/var/lib/containers/storage/overlay/bcddb249d4d6d5abde2956c5c462ed957c850cf774ef499c7ff50f09ef3d3c3a/merged major:0 minor:1210 fsType:overlay blockSize:0} overlay_0-1212:{mountpoint:/var/lib/containers/storage/overlay/7c1f39ef38bb64c9677af0df3e122cab50e28301ab10c3be40f7ea851f26ea5c/merged major:0 minor:1212 fsType:overlay blockSize:0} overlay_0-1214:{mountpoint:/var/lib/containers/storage/overlay/01413ed4c30add9fa83b8845b4eb9a717c988e4a9ba05576e702e49b7e8053b7/merged major:0 minor:1214 fsType:overlay blockSize:0} overlay_0-1233:{mountpoint:/var/lib/containers/storage/overlay/dbb4c545ec9a513d13519b1d2916f29bd9d7397ad64f9c7cfffb0f74b7ba12e5/merged major:0 minor:1233 fsType:overlay blockSize:0} overlay_0-124:{mountpoint:/var/lib/containers/storage/overlay/ef5bd58eb0ad358ae1e1366823e52735a9c2da31c6523b4d529a64003363c6ea/merged major:0 minor:124 fsType:overlay blockSize:0} overlay_0-1243:{mountpoint:/var/lib/containers/storage/overlay/31dfcb3b748c6f810f67792a4316192a70fa89a49159323469cf5dad0bbd0a9b/merged major:0 minor:1243 fsType:overlay blockSize:0} overlay_0-1245:{mountpoint:/var/lib/containers/storage/overlay/5161242d90f8a36c787c8c5cbef00b3f34e0e943f75b0130508a435b77ca67d5/merged major:0 minor:1245 fsType:overlay blockSize:0} overlay_0-1249:{mountpoint:/var/lib/containers/storage/overlay/b59fd1ae09d5c45916442937107dd099ab580ec1d18bdfc15ebc4e203260983e/merged major:0 minor:1249 fsType:overlay blockSize:0} 
overlay_0-1251:{mountpoint:/var/lib/containers/storage/overlay/37cf0990d4a44b2d0f4c3a2eb220ea70aa44a5575cecec27108421bcbdbaeb15/merged major:0 minor:1251 fsType:overlay blockSize:0} overlay_0-1253:{mountpoint:/var/lib/containers/storage/overlay/2a754f5e3712e07bcc2c79351887853b0ae97c035167e7c610872252e7544a63/merged major:0 minor:1253 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/3a22c4cdf61b84fec2cb9e5a85bc350ab2097e04f6793ecaca65d3228b287a35/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/f4a9d268cf636cb09fd9194375777d2be4ab675ec6be26ceeda50ae7e31db2d6/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/4c1a93970a27e6ff2b780eba56d77c18d0e24b8d3167450fe1cc9e9906ffe78c/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-144:{mountpoint:/var/lib/containers/storage/overlay/1d9a07802e797c42a92202567e6776d286f2bf52a2a3cc7a6341e1f1c29eb632/merged major:0 minor:144 fsType:overlay blockSize:0} overlay_0-146:{mountpoint:/var/lib/containers/storage/overlay/c7736f2dd9e62e9b9b7deed044c391c33db2fe9f42042828ec3b149b8f0dbcd2/merged major:0 minor:146 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/120da911464c1fb225cb608d76b4a82916c33cac96a69e3e9a86bc3157fb10e9/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/7e22b4931d2b3d6acab6b8cdf0ee4cc2af80fe5e0a51feb9bcfc4f8ec9a08526/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/9236aa71b7b38f4bff4c386541400712ff4a86c5ee2aad3a59abbeb721709243/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-166:{mountpoint:/var/lib/containers/storage/overlay/8a8321cb932b4a522658232dedf3acbbc58acfb292a12e866005fbd79c81d391/merged major:0 minor:166 fsType:overlay blockSize:0} 
overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/6f8d0ff5f4e7c5e5a5dcf8f000a18273128e6517cdecfdfb8a2f6075acbe4fd2/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/c660ac3557d781d61a3c882878e54a8cf86a2746440d0988edf943c2ddfa9a65/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/8236d22cbdd2919407b45e0d0568eda372d4f537bfa3ef9cacd9cab7c47a50dd/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/f2eb61001221c480748a3495da35b68803f47bcce4cb08d0100367e6b89aebb0/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-191:{mountpoint:/var/lib/containers/storage/overlay/f70fdcf95efa88a65471e05e70e186a76e212181a7df83317a1f36ede95aa12c/merged major:0 minor:191 fsType:overlay blockSize:0} overlay_0-193:{mountpoint:/var/lib/containers/storage/overlay/4bcdf6360724da0671b5a37769450c665e1215c51d7cc9e81fc8fb454dab693a/merged major:0 minor:193 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/643ac61b6e9c7dee37c5b170372727dd86a2daa5bdb22c81f453640482a5ea1d/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-200:{mountpoint:/var/lib/containers/storage/overlay/ef68c7290f7af07741cdcb315e6a4485e7b1f5f555c56ef741e8925950486f6c/merged major:0 minor:200 fsType:overlay blockSize:0} overlay_0-211:{mountpoint:/var/lib/containers/storage/overlay/c3de0e27d9cef0462673f0b35d281abf75047258bbd17804b232b6ca877e1e91/merged major:0 minor:211 fsType:overlay blockSize:0} overlay_0-216:{mountpoint:/var/lib/containers/storage/overlay/7c9f94dc7fbbd3a44213f7ea29328ce15650e2d2b3389013c4df3ac6a4e009fb/merged major:0 minor:216 fsType:overlay blockSize:0} overlay_0-224:{mountpoint:/var/lib/containers/storage/overlay/c85acc172b910a6d6388b23270b22ea8747398e06f1eacf1f277c299684b21c7/merged major:0 minor:224 fsType:overlay blockSize:0} 
overlay_0-232:{mountpoint:/var/lib/containers/storage/overlay/d62aa754c6b5995786c31c420c63322e01df029e299f577c6813b361ce23f13a/merged major:0 minor:232 fsType:overlay blockSize:0} overlay_0-234:{mountpoint:/var/lib/containers/storage/overlay/c397723290e3f49a0389eb76d4d69586e59ffa0aae5e3e92046abbf031d609f1/merged major:0 minor:234 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/9ebaf5d8e34f860c47e435c286cd18c600d76ca1649e058acae48c861175d8ed/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-249:{mountpoint:/var/lib/containers/storage/overlay/570123ea416f17114fcae3f490c7dbf33043a4819a3718186d91cff21bcd5236/merged major:0 minor:249 fsType:overlay blockSize:0} overlay_0-253:{mountpoint:/var/lib/containers/storage/overlay/5eaddb607223f15f2ea04de0bb0995271842f477f5872a3de93a11529d6bf75c/merged major:0 minor:253 fsType:overlay blockSize:0} overlay_0-255:{mountpoint:/var/lib/containers/storage/overlay/3bb54f441f045f86369389fb372525629fd421eddd06a04d5c9a0fe07a046c82/merged major:0 minor:255 fsType:overlay blockSize:0} overlay_0-257:{mountpoint:/var/lib/containers/storage/overlay/7f8375596b2ab5f559ff224c5046dc2687020aaf5c6f3c99995f0359e52c3dcf/merged major:0 minor:257 fsType:overlay blockSize:0} overlay_0-259:{mountpoint:/var/lib/containers/storage/overlay/b0b62acafc8b64d276c85ddd41e6da46ad3ea7be162f750e59750002ed3796b1/merged major:0 minor:259 fsType:overlay blockSize:0} overlay_0-261:{mountpoint:/var/lib/containers/storage/overlay/55ef4d09ca72458597f372804dedc522cb9d0db75675cba15e22f5bcedfff899/merged major:0 minor:261 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/6034f939618f6eba815ba5c8463be85faf0a54eb7ece5e1d019569944d2c9985/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/9ff415c0e21f491c3f6dc00d577cca2e55b098dd69c0bde3bc298a22ced5b5c2/merged major:0 minor:277 fsType:overlay blockSize:0} 
overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/82fb5f275cf8014b40d9050cd51b807daac3a9ffb1901c48a30c518eeb697709/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/b4eb71cc5d480b52f02ce8d4197ca15855c3b671d76b58780683f8b0e66eec7f/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/46c57282d826f47312deeeffd36ab3b27aac2891e427965b24918c24456e25e6/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/7e3d993573166c6dfeaeda34d2ec326f42debd978fd83b566228ce108e305175/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/6841c6bdfd7afaf50c2fc1c3ed441a283b2c85e9e3f02f53019e303f0b79dbb2/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/7883ab83040058df480bd758bd358a20363ef999571f5222f4e55070ead30f51/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/40bb7f8f3c7dde87fc169e9434774ff557ac766e814a0cbd17bf49aa4c3cb7d2/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/898c26409d33ce592b3bb2d8f8752f33926f0c411e85c4adb7a68bb03f8f913d/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-361:{mountpoint:/var/lib/containers/storage/overlay/7245b5d9224ee258d6812600ffc318255c4cc770d310e741279c59cf5a91944d/merged major:0 minor:361 fsType:overlay blockSize:0} overlay_0-363:{mountpoint:/var/lib/containers/storage/overlay/2a850f303f79ef1e62f95c5433135f942b55006174eab5a2fa3379971fcbe6c8/merged major:0 minor:363 fsType:overlay blockSize:0} overlay_0-365:{mountpoint:/var/lib/containers/storage/overlay/9aa63b7e4ee62240ee8449c093cab85360c155aeeaf18af63fcfd5c5802a6223/merged major:0 minor:365 fsType:overlay blockSize:0} 
overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/e267232cfd0b4a028bfb248dbd044065a08e82e79956aa7fd4ed080927b49ef0/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/cd5abbc0efe2c9082f2ca9018b4b7f8233234882023bfa08ef7a15d144229541/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-371:{mountpoint:/var/lib/containers/storage/overlay/54fbfb49adcf1d2bed25e32da3905b88f6786110aa42f9ef09e20c75d87902de/merged major:0 minor:371 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/c631e42820e52656e39f7338145e294a5f95acce47879c81800c3840c2efcf33/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/3e46a78c0d62a772a36dc76781da99c56f011629038be623eb3805c57bcdf781/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/b2d2691fc292ac85d76ba572ab85db558599e50c94f673708dd71a90d9c4d94e/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-394:{mountpoint:/var/lib/containers/storage/overlay/fb630c688f0129c3595af1bdf2940a05de55c8354f79114fcb57433e31ecf202/merged major:0 minor:394 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/0ba5d2e749c418195bbd86b7011cb4e626ce5d0486731178a9b0afe1f888c2cf/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/531a6c0084db3c5710c0236359678b1821c23a0fca47d0698c32eb2655320c4c/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-404:{mountpoint:/var/lib/containers/storage/overlay/9b918dca0032b80d95fcce139ce40af75bba6111326d8618f310a5d740513443/merged major:0 minor:404 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/877515311b6a9ef9f525c12c010e0bbd8dbf9ca98e9c6c50e75f647c0aca1626/merged major:0 minor:41 fsType:overlay blockSize:0} 
overlay_0-412:{mountpoint:/var/lib/containers/storage/overlay/604dfd7b7605b1e69e8606dce1fd92b4fe56b2df67a63e2515b21c605b4b5487/merged major:0 minor:412 fsType:overlay blockSize:0} overlay_0-417:{mountpoint:/var/lib/containers/storage/overlay/51a67387dfe6fbf852904778ec39368b30a3c2a181bc5a671b5be5b0803465d4/merged major:0 minor:417 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/19003b0765a57f2ef7c5fa21f68559c7c265dcccf7f337207e936c861a32793c/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-421:{mountpoint:/var/lib/containers/storage/overlay/88fd3dc9f53c8e1592077f3e5ff4f16b5f17368e0e9c22a90220382a718fd31e/merged major:0 minor:421 fsType:overlay blockSize:0} overlay_0-426:{mountpoint:/var/lib/containers/storage/overlay/822ad09833e366f25a85022e5837847930a70d263058323cf603b27035592ed8/merged major:0 minor:426 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/98b2f2be4513a991d70f414b73a734d66d5685b5f49f9a53c16943473f213793/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/5440a1629c79b362f0b8552f471ae74bfc38115ee1af1a21896e49b196110a54/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/e295e4c6af66d9427533b6bf5f629567bf09054d28c930339077ae88f15dce76/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/85573e6fbdacb7f327bcd33efd3dbf1829ae73d342af221d937f25dd0349bcde/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/d074fcf4d7bd85ef3dbd2642ac6cfe6827a24ce74dad9ceca2d99e780712e482/merged major:0 minor:439 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/e416a29bc36976a3ee4bc38acc43d7c2b23579fa3130f2ec47a88c193c2d27f7/merged major:0 minor:44 fsType:overlay blockSize:0} 
overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/706046141a4d2414f0763a6576184f9e04d6f1aa3e8a8c7f2c807e0c19f94727/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-443:{mountpoint:/var/lib/containers/storage/overlay/3aa744ed8a45f06a305222f7dde76595bbd6bbe4f7c4167799bc88d42b924659/merged major:0 minor:443 fsType:overlay blockSize:0} overlay_0-452:{mountpoint:/var/lib/containers/storage/overlay/1585d8ff378bd1ddf1cf0937df5018fd5225b051668608f41d03b1f27542f7ed/merged major:0 minor:452 fsType:overlay blockSize:0} overlay_0-454:{mountpoint:/var/lib/containers/storage/overlay/0ee49ea15438ca07cf583418ee502e7102119cf4b36a5554eb90d14b26a56652/merged major:0 minor:454 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/b18cd429d4a66537fd153fd8ecb97e803665451db70e14235f64ca691fac17c9/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-465:{mountpoint:/var/lib/containers/storage/overlay/8ee0f535db452d3e13883ece63be5a55e4cbe610c9e8cc22f32e29b55df342f8/merged major:0 minor:465 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/5f3ec83717682a7629c5ac22403b086e0184c4425a2d20c234cb5a4ba67343ba/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/4f8f9f00dbb702cbc7cc42be00c43f88738c14192025b7cb0c350830ef8c33d3/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/a72598a2554e6517aa5e970e9f3fa46391ff7e27e7162c6f65c0790af8034c31/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-481:{mountpoint:/var/lib/containers/storage/overlay/36b36d8fe54ee6f539001ab2202ed92c0e925645a862672192379c500e8cf2d8/merged major:0 minor:481 fsType:overlay blockSize:0} overlay_0-490:{mountpoint:/var/lib/containers/storage/overlay/5f5deb825524d36ee9ab09ddba7446bbee47b817d0fe6e33a4115efbaabeac47/merged major:0 minor:490 fsType:overlay blockSize:0} 
overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/488086e8c64add2ecd257ffab8a752eaf75cce567f8aed2971005ae913298389/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/799fa76d33fd8ff52036678529ff4ccd079f7df8186b96792849e5a68ee08f26/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/1c47bb923f52be179e52db029396083430f72a0e746ade48706da8aca11af59c/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/a19a74980508982166a1c9831283ba9c7ec714cc4261d1d28150f1fcb3925a6b/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/a475d1516fc69d6181b704744e3eb6b63bc7d800f76c49e12703d607319dfb54/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-529:{mountpoint:/var/lib/containers/storage/overlay/d3146bcf6a6f28028a7d5973227e0cdafc9b8c5dac97c8d60282b59d20422e31/merged major:0 minor:529 fsType:overlay blockSize:0} overlay_0-531:{mountpoint:/var/lib/containers/storage/overlay/857a8795f6b9da27e13b50c2a4dd8b310a6155e67c90b7a8b35c61ecf58270aa/merged major:0 minor:531 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/267875c2922e3bcaab057d0bcbcf5de96d451a8285ac5bd31f40c75352f2514f/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/b83aba312433db8ea67c2de11656bfe05a09c4fdf344f1dbef05d83642a37a1b/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-551:{mountpoint:/var/lib/containers/storage/overlay/c40e5a523ab45adcb9f7d975e0ae09e3d073a64411a5c1af3455d858f7810a56/merged major:0 minor:551 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/1735b655eee55b45f27547058c7242f242fe58d5464c249c2954751fb8382bc5/merged major:0 minor:553 fsType:overlay blockSize:0} 
overlay_0-555:{mountpoint:/var/lib/containers/storage/overlay/8362f7b44fbfc006e58dea5f534d0228a3bbc1da2dde99168512baa7a638c352/merged major:0 minor:555 fsType:overlay blockSize:0} overlay_0-557:{mountpoint:/var/lib/containers/storage/overlay/23c85c1df61c80c4beccabae44a066f28c144864517ad158f3d5ed915578009d/merged major:0 minor:557 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/93518c3465c1dce79272bbac9f7c78cc2ca0c2bc0f5404db63df0ff40809a143/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-565:{mountpoint:/var/lib/containers/storage/overlay/e707eac7fad33528210d073814e871566e081f0cae125b2f9721536a482f2eab/merged major:0 minor:565 fsType:overlay blockSize:0} overlay_0-567:{mountpoint:/var/lib/containers/storage/overlay/2dac1816881c8d9f9aed855fee3b2d0e219586cd9a69a3448ee92ec3faae8a6f/merged major:0 minor:567 fsType:overlay blockSize:0} overlay_0-569:{mountpoint:/var/lib/containers/storage/overlay/53b9737868e231722db4c081f81079d82bb9039b54b47414ff555b3359fdd891/merged major:0 minor:569 fsType:overlay blockSize:0} overlay_0-581:{mountpoint:/var/lib/containers/storage/overlay/27168da65ff134af7f5e3e2b84ef3d98de2d934352ca028e4ae505b0778ba68d/merged major:0 minor:581 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/416ff591624a01d34ec79f35d9d2b5c5975b90baae806dfb54958b0f78823457/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-602:{mountpoint:/var/lib/containers/storage/overlay/39dcc3bd010e2294f947859e6d5fde49a9a39875cab076dd9bd28e61834859fb/merged major:0 minor:602 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/ba732b4bd85147f0276fbbcfab158f0528bff3e57874d7143762df30bad7d7d6/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/e874ee8219cc9ff1fdfe79f413324bbe5bedd5d48ee498115586d3ae041cc8d2/merged major:0 minor:621 fsType:overlay blockSize:0} 
overlay_0-625:{mountpoint:/var/lib/containers/storage/overlay/09869cfb9f4ea4c56e0e50795aa2cfbffdf682c292b2228d9842c0b809240897/merged major:0 minor:625 fsType:overlay blockSize:0} overlay_0-627:{mountpoint:/var/lib/containers/storage/overlay/a2cdc7cb824a8fce1e50c5da5385cbd337b8e10c6cf20e25cf61aff29102bfc3/merged major:0 minor:627 fsType:overlay blockSize:0} overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/a7080341a2243294dce5dbff74a07b1b30701b0dce9c993d3de471c86653e77a/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-643:{mountpoint:/var/lib/containers/storage/overlay/5d5863b5008a6bc2a7c235a35f535823c4dfb1de2ad21c1bcbeb34d3cc8bebdd/merged major:0 minor:643 fsType:overlay blockSize:0} overlay_0-647:{mountpoint:/var/lib/containers/storage/overlay/21a8b7676618a3664ae0d7d533d8b0f7a2e9ae271438ee76342feeb04e70e2b3/merged major:0 minor:647 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/c883d7c8dae5a3ab9639a507b434ccfb65e0823dfea2aa2e54fbe48317b92926/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/00b750e785e9297b8dcc513da9b6ab22e3d7a28cb5958e1d67fa9aad700a7ac8/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-667:{mountpoint:/var/lib/containers/storage/overlay/66d13c289d5da3fd6c0d8b49e9e153dfc63defee55ba0c6c452263d9703c6751/merged major:0 minor:667 fsType:overlay blockSize:0} overlay_0-669:{mountpoint:/var/lib/containers/storage/overlay/d8633b796095ffae9072e88665b78932115b08901c000783bff5db88b4b068c9/merged major:0 minor:669 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/bf25ab862ebf527b440f523e13f08abdd24275b3de8f4cbca32963666b747972/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/95d763ae3fbb95a47eb52654dde8857c77469a894852295e29c322d1b6a0fd7d/merged major:0 minor:684 fsType:overlay blockSize:0} 
overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/99af3ef05db355f1f5685f9cb41c6db7e40692fffa3f018addcf26dd62cc0578/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-692:{mountpoint:/var/lib/containers/storage/overlay/a9e72cceece7927562e1e15817d4ebd6a9a5c9ff6c94b1234b335eb88b194fc7/merged major:0 minor:692 fsType:overlay blockSize:0} overlay_0-694:{mountpoint:/var/lib/containers/storage/overlay/3cc0b8bee522c097f80ed2b324efbf514361b4708d2bc3eb10f5335aeb5a2a6f/merged major:0 minor:694 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d089e7c37e00822be318c2bb74aaf2f77d7ecee10aef584e539a3b2526dd2041/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/5097b1e13d8a06632c198172108b2203e22f0c60b3a4a468643f1ad35b3a5f6b/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/6c3009ed58bd1b346cdc8a5bfd108aa2f8a8c9154dae49ddd564e3d0f6758567/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/e1bdae666ac8fcc8dec5baed63dd9878af9a7e68c68dde5972dfda1f057bcf77/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-735:{mountpoint:/var/lib/containers/storage/overlay/ee0c8130b8f98dbcf188e08bc138ea5c226faa1b3fa8cc3a7615c17ae83dcd1c/merged major:0 minor:735 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/988ffebbe97a891a96ab12a2c06dc4f8124778daa14a6f37d377c4d5c55227b7/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-745:{mountpoint:/var/lib/containers/storage/overlay/8cf60d9f9dc15f851cd89f6bffcbbec6d8d524e7a863459654e8229889f59288/merged major:0 minor:745 fsType:overlay blockSize:0} overlay_0-751:{mountpoint:/var/lib/containers/storage/overlay/aa803f739846d71c2e292d69c0a2793615d4d4e10b3897d1e11b09ff7bcbeedc/merged major:0 minor:751 fsType:overlay blockSize:0} 
overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/88be0fbb8a450e091ab4f04bf7722670797b28a46affbe6c3cb1ef7964eca249/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-771:{mountpoint:/var/lib/containers/storage/overlay/75e645c1a0e8b0d7ab375d9fa6d21355358934fc11b5b1573693d17509eec9ea/merged major:0 minor:771 fsType:overlay blockSize:0} overlay_0-773:{mountpoint:/var/lib/containers/storage/overlay/8001cd69e679914ed8b1d970df2a7b502bdff5d6794cffb34408fa8f562f33e5/merged major:0 minor:773 fsType:overlay blockSize:0} overlay_0-777:{mountpoint:/var/lib/containers/storage/overlay/d930db0eab40073de60b54488569d9bd1e969502bc567c0bdff8f20ae7d22bc2/merged major:0 minor:777 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/a5bafd7e866f6c58fd82b57589c5b869b304fb2c6737ee07dc02678a878c4102/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-785:{mountpoint:/var/lib/containers/storage/overlay/3b15eac1c592bc3a1d276ec1ab34ed59329a2bd344d0746d9f9f4959db19ddfb/merged major:0 minor:785 fsType:overlay blockSize:0} overlay_0-787:{mountpoint:/var/lib/containers/storage/overlay/d4687ed917e418c9e4094e7f6966dfc44af92ebbfbd8233c57682ffbfabf6f3b/merged major:0 minor:787 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/5e0cb223d2ebe763d92633c689e030f91695941b1635b887ef976a086eb721af/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-796:{mountpoint:/var/lib/containers/storage/overlay/8d54778c4413b5055a71782f8a1d8e2266bdf1b820b98061860a68abdd83a001/merged major:0 minor:796 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/8991631ecb8b3af4e6a69a7f14f6e1cfee3c0eddb6514dfcec02882192638166/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/cea05c989dda229cd6e7d5c2efaa9641a89a5d366cade8d46bd1d1b23f545317/merged major:0 minor:824 fsType:overlay blockSize:0} 
overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/cad5dde7f839db38d3e70fa72e5ac6479c4ceef28c759f454266e9b08121bc9b/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/92e0c6a470245fcb1b6c01d20f58fd617f6b689550dc0dbe6b7a5b1ad5b3ccf0/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/91404b6b927b2b2bd2d70176dfde94b43f866828c1dcbd226b677b6faf5d59cf/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/da4bb9a84aaed576800c1451450acf97d772876f8c6deddfc2ccb3880421a822/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/48f35566bfb09ba69c9a56540a02e44d025ba74c08d920c30d99d971f5bac91f/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/a6b925a99bbdf12b3c175afbb1af7cf65cb5267b2d35352cf5c9f6c1ca796d57/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/d72658828d852e48d2b39772253f5170a31d81f28a86bf619c07b004fceb052f/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-891:{mountpoint:/var/lib/containers/storage/overlay/ddf88d3cc23a9b26f32b422e16e5488c648d3bc515883d2c0caa77505ba349c3/merged major:0 minor:891 fsType:overlay blockSize:0} overlay_0-893:{mountpoint:/var/lib/containers/storage/overlay/489065609c31a33ce3d331983860471be3bf4f43d7a5abee62073e4766aa9d79/merged major:0 minor:893 fsType:overlay blockSize:0} overlay_0-897:{mountpoint:/var/lib/containers/storage/overlay/45f5097ab063937122e082a4eec6a28a91359253669277eff8fe39837e010553/merged major:0 minor:897 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/c93d6c82092a3ba4d7449842f2faac444b5cc6c145581343b27470589371c37a/merged major:0 minor:90 fsType:overlay blockSize:0} 
overlay_0-903:{mountpoint:/var/lib/containers/storage/overlay/8d24c83da7f12e138ddf00a67dddb8a2f5d4181a103eb6ec843104db91a75778/merged major:0 minor:903 fsType:overlay blockSize:0} overlay_0-925:{mountpoint:/var/lib/containers/storage/overlay/aaa80fc27652b4960926bd09d7f75a71c27822ab9ad28f975906feb4ca41743b/merged major:0 minor:925 fsType:overlay blockSize:0} overlay_0-932:{mountpoint:/var/lib/containers/storage/overlay/1821790f372a1486fc0b76b742fb3cca19b2692dcccf2c867280d8d6ec3d02a4/merged major:0 minor:932 fsType:overlay blockSize:0} overlay_0-947:{mountpoint:/var/lib/containers/storage/overlay/a1cf242c1beffcc9f647cefd8fef90d6bcb19fe5290755a2714578648efd0643/merged major:0 minor:947 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/a271935570631164bab069b0390b68393882da148153a0a2ecbf5f9105acd2c1/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/38c14967a0422ba064a8ac99cb5d5ba9063714a6487768173c7ce997870d33ce/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-958:{mountpoint:/var/lib/containers/storage/overlay/94bdf9fa6eed66b290195c261f440421aeb0e12e33b46a5b1ba76ca7538eddd9/merged major:0 minor:958 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/4f466c300c60100719becb6b4911b38dbde7fe02ef827a0f6b119cc7f4e349bb/merged major:0 minor:960 fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/f3fe30309a83ff0f39d86249dec2bcb77c372df8ca9e60f36130b7c1cca156a3/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-987:{mountpoint:/var/lib/containers/storage/overlay/d8aff7df0bf4ecc1fbb832eb8110e3aa3b64c7c46ed18aa5de4aab4078cd7ce2/merged major:0 minor:987 fsType:overlay blockSize:0} overlay_0-989:{mountpoint:/var/lib/containers/storage/overlay/6b91210d7ab4b119756deaa3e253eb2c693c03bd42dd6a3d014085f7a3f81c74/merged major:0 minor:989 fsType:overlay blockSize:0}] Dec 03 
13:58:29.724273 master-0 kubenswrapper[16176]: I1203 13:58:29.723138 16176 manager.go:217] Machine: {Timestamp:2025-12-03 13:58:29.721882911 +0000 UTC m=+0.147523593 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:5051321c-b7a7-4bc8-b64a-b5b2f6df7e9d Filesystems:[{Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~projected/kube-api-access-djxkd DeviceMajor:0 DeviceMinor:306 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:430 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:610 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-932 DeviceMajor:0 DeviceMinor:932 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b27a5bf76da5bbfe79e4cde4e4ec10d9c8fb9d7c32e2d0acb5526773cb73fa83/userdata/shm DeviceMajor:0 DeviceMinor:995 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1208 DeviceMajor:0 DeviceMinor:1208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4f80cc3300a6faa50f3c9cfd432aadbd14664bd22fcda28d52a8f9974c24555/userdata/shm DeviceMajor:0 DeviceMinor:814 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~projected/kube-api-access-rjbsl DeviceMajor:0 DeviceMinor:984 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:297 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4dce4931560a1d11b8166404560a3acca2a0b05eeea2480e60249b2b19ab9889/userdata/shm DeviceMajor:0 DeviceMinor:524 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-989 DeviceMajor:0 DeviceMinor:989 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:1000 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:638 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:1011 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1041 DeviceMajor:0 DeviceMinor:1041 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/486d4964-18cc-4adc-b82d-b09627cadda4/volumes/kubernetes.io~projected/kube-api-access-m4pd4 DeviceMajor:0 DeviceMinor:808 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:688 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~projected/kube-api-access-tfs27 DeviceMajor:0 DeviceMinor:300 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/939b2c92bcdf7cd1a7639905546ba592e8fa9fac9978494aea2a13c1b29704e8/userdata/shm DeviceMajor:0 DeviceMinor:520 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-569 DeviceMajor:0 DeviceMinor:569 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-602 DeviceMajor:0 DeviceMinor:602 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1039 DeviceMajor:0 DeviceMinor:1039 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1067 DeviceMajor:0 DeviceMinor:1067 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1127 DeviceMajor:0 
DeviceMinor:1127 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~projected/kube-api-access-dwmrj DeviceMajor:0 DeviceMinor:972 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfdb08e2c2d86dfcd1635e2f3b21f970adbd160aa3b866a772beff85b82f4e9c/userdata/shm DeviceMajor:0 DeviceMinor:189 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-551 DeviceMajor:0 DeviceMinor:551 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:709 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e90ae89b899f7c4adaa8b2c1c88e7171c1cb37b6c4cab4e7e1756faa4c54abf5/userdata/shm DeviceMajor:0 DeviceMinor:985 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-987 DeviceMajor:0 DeviceMinor:987 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:999 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1002 DeviceMajor:0 DeviceMinor:1002 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:1007 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1169 DeviceMajor:0 DeviceMinor:1169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-191 DeviceMajor:0 DeviceMinor:191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:299 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/98392f8e-0285-4bc3-95a9-d29033639ca3/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:461 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~projected/kube-api-access-5mk6r DeviceMajor:0 DeviceMinor:980 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:126 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:460 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:563 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-745 DeviceMajor:0 DeviceMinor:745 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~projected/kube-api-access-nc9nj DeviceMajor:0 DeviceMinor:908 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e9f484c1-1564-49c7-a43d-bd8b971cea20/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:977 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-627 DeviceMajor:0 DeviceMinor:627 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1c562495-1290-4792-b4b2-639faa594ae2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:290 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~projected/kube-api-access-fw8h8 DeviceMajor:0 DeviceMinor:302 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~projected/kube-api-access-jkbcq DeviceMajor:0 DeviceMinor:310 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:614 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:110 Capacity:49335554048 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-1035 DeviceMajor:0 DeviceMinor:1035 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:296 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~projected/kube-api-access-7q659 DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/938a08c4d1aea74e9960886367790806d0ec8cf5d4c33d8d49b8a65ae6f45942/userdata/shm DeviceMajor:0 DeviceMinor:333 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-694 DeviceMajor:0 DeviceMinor:694 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-255 DeviceMajor:0 DeviceMinor:255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c0c28bf839b2e5e1bceddd001eae58acc3775691713261478c7742c6a0302aba/userdata/shm DeviceMajor:0 DeviceMinor:1030 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1069 DeviceMajor:0 DeviceMinor:1069 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8bff50a8699bca914ec79ab5b1ca3bdf66c5588c444f1b0bb6f8b67e98260e9e/userdata/shm DeviceMajor:0 DeviceMinor:336 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1203 DeviceMajor:0 DeviceMinor:1203 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-232 DeviceMajor:0 DeviceMinor:232 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~projected/kube-api-access-nxt87 DeviceMajor:0 DeviceMinor:301 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:307 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1251 DeviceMajor:0 DeviceMinor:1251 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-224 DeviceMajor:0 DeviceMinor:224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:313 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-454 DeviceMajor:0 DeviceMinor:454 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1249 DeviceMajor:0 DeviceMinor:1249 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27a9c385ef13072222db2fbae2957d6a0f6b0dc3cf6ddba3e51ba6e2d32e6d95/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/55351b08-d46d-4327-aa5e-ae17fdffdfb5/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:580 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8154ea1badaee93911650d5b6c9a0d50ee5f865cc92efee68e3e567a26fac336/userdata/shm DeviceMajor:0 DeviceMinor:1029 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/03494fce-881e-4eb6-bc3d-570f1d8e7c52/volumes/kubernetes.io~projected/kube-api-access-6k2bw DeviceMajor:0 DeviceMinor:378 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:915 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~projected/kube-api-access-cgq6z DeviceMajor:0 DeviceMinor:316 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-555 DeviceMajor:0 DeviceMinor:555 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl DeviceMajor:0 DeviceMinor:739 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/09296e49349480369110af144596d7185a5c6f4d0eac0845480367f8485c6e23/userdata/shm DeviceMajor:0 DeviceMinor:837 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2/userdata/shm DeviceMajor:0 DeviceMinor:986 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c180b512-bf0c-4ddc-a5cf-f04acc830a61/volumes/kubernetes.io~projected/kube-api-access-2fns8 DeviceMajor:0 DeviceMinor:309 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/kube-api-access-x22gr DeviceMajor:0 DeviceMinor:317 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-667 DeviceMajor:0 DeviceMinor:667 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj DeviceMajor:0 DeviceMinor:623 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/608e5faf1d1f7ffd467c7714def83c802d4d5d7a97b5dd1c6daac1ec34f49d3a/userdata/shm DeviceMajor:0 DeviceMinor:528 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-785 DeviceMajor:0 DeviceMinor:785 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~projected/kube-api-access-9cnd5 DeviceMajor:0 DeviceMinor:1022 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-897 DeviceMajor:0 DeviceMinor:897 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1144 DeviceMajor:0 DeviceMinor:1144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1245 DeviceMajor:0 DeviceMinor:1245 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf9eaca9ad61c4a7a095f39cead558e140c3f36068b2d37492a50d298cef2968/userdata/shm DeviceMajor:0 DeviceMinor:515 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:616 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:975 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1212 DeviceMajor:0 DeviceMinor:1212 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-771 DeviceMajor:0 DeviceMinor:771 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~projected/kube-api-access-2q8g8 DeviceMajor:0 DeviceMinor:911 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-257 DeviceMajor:0 DeviceMinor:257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca1230f4b492fd13fa8365a33466faeb6cba6f259f3b7f061433306ec990355a/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1253 DeviceMajor:0 DeviceMinor:1253 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1173 DeviceMajor:0 DeviceMinor:1173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:156 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/79a4ce4fa1bb86b3d2f2841576cb8183eb88487183d1482128b3ccf54e4a6592/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-893 DeviceMajor:0 DeviceMinor:893 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:398 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1071 DeviceMajor:0 DeviceMinor:1071 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:93 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1129 
DeviceMajor:0 DeviceMinor:1129 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d38d102-4efe-4ed3-ae23-b1e295cdaccd/volumes/kubernetes.io~projected/kube-api-access-v429m DeviceMajor:0 DeviceMinor:380 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-426 DeviceMajor:0 DeviceMinor:426 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/36da3c2f-860c-4188-a7d7-5b615981a835/volumes/kubernetes.io~projected/kube-api-access-jzlgx DeviceMajor:0 DeviceMinor:431 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfbeecf9d9d844162fe5ae1358a1949d1abd819e4ec98b8cfb9e501a9f09c12e/userdata/shm DeviceMajor:0 DeviceMinor:400 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:765 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d/userdata/shm DeviceMajor:0 DeviceMinor:811 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-259 DeviceMajor:0 DeviceMinor:259 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:311 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-669 DeviceMajor:0 DeviceMinor:669 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-796 DeviceMajor:0 DeviceMinor:796 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-958 DeviceMajor:0 DeviceMinor:958 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:323 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/82bd0ae5-b35d-47c8-b693-b27a9a56476d/volumes/kubernetes.io~projected/kube-api-access-bwck4 DeviceMajor:0 DeviceMinor:548 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-947 DeviceMajor:0 DeviceMinor:947 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06a6bfff8d933d9670b8e8e8de6cfda51fcb359ae53ec1c55a93a9738f4fc201/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80e95cd74710420c097c7cf837380f44e3fef76745b76b26d24bb3a848d0ba8d/userdata/shm DeviceMajor:0 DeviceMinor:836 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-166 DeviceMajor:0 DeviceMinor:166 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b95a5a6-db93-4a58-aaff-3619d130c8cb/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:644 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1139 DeviceMajor:0 DeviceMinor:1139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/317e7dd62fa100db1d45ff57aba484e787374c6332b21d016a43057d248fc561/userdata/shm DeviceMajor:0 DeviceMinor:991 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/49f87764fc511fdc3d85df01f4c3ded21c480f8f90f5b40b571297ddabf883d1/userdata/shm DeviceMajor:0 DeviceMinor:122 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-361 DeviceMajor:0 DeviceMinor:361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:462 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-443 DeviceMajor:0 DeviceMinor:443 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97/userdata/shm DeviceMajor:0 DeviceMinor:979 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1106 DeviceMajor:0 DeviceMinor:1106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/52100521-67e9-40c9-887c-eda6560f06e0/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:292 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:318 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c/volumes/kubernetes.io~projected/kube-api-access-nrngd DeviceMajor:0 DeviceMinor:326 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-553 
DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:832 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bff18a80-0b0f-40ab-862e-e8b1ab32040a/volumes/kubernetes.io~projected/kube-api-access-zcqxx DeviceMajor:0 DeviceMinor:761 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:127 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-211 DeviceMajor:0 DeviceMinor:211 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:464 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~projected/kube-api-access-zhc87 DeviceMajor:0 DeviceMinor:973 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1210 DeviceMajor:0 DeviceMinor:1210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:710 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:159 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-261 DeviceMajor:0 DeviceMinor:261 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b23778ca4e9ae4dbb3de59134916161ec83a634b903bdd6f9ff3c7980d2471f9/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/44ddc337512cf47184ee9f63cffcf4b3f72f69c2c567abe7ddd38b25975bdf7c/userdata/shm DeviceMajor:0 DeviceMinor:517 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:547 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1137 DeviceMajor:0 DeviceMinor:1137 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1191 DeviceMajor:0 DeviceMinor:1191 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:615 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-773 DeviceMajor:0 DeviceMinor:773 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:805 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2d9b8c691d5f3ee7b94a063c9932a9e9584dbd2cc766bb12c9c9139903e78355/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/50f28c77-b15c-4b86-93c8-221c0cc82bb2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:373 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/12a33b618352d2794ebe540e15ad19cf6feb41518cd952ee7771d4e774685a53/userdata/shm DeviceMajor:0 DeviceMinor:332 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-567 DeviceMajor:0 DeviceMinor:567 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-581 DeviceMajor:0 DeviceMinor:581 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:586 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1056 DeviceMajor:0 DeviceMinor:1056 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1133 DeviceMajor:0 DeviceMinor:1133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf859b5a264e6e297ea665f1887ffdaf1a0689d7640ff2f1e3f3254f07fa527e/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-490 DeviceMajor:0 DeviceMinor:490 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e359aa49722552596f9defcdc0a064ae42e30ac26237dbcecf3f9889e20a2fd/userdata/shm DeviceMajor:0 DeviceMinor:500 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-249 DeviceMajor:0 DeviceMinor:249 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes/kubernetes.io~projected/kube-api-access-8xrdq DeviceMajor:0 DeviceMinor:379 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:833 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd60d797c4fb6bbacd83a95102004f01bd67ec43516cde99335b0ab9b0c67773/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-216 DeviceMajor:0 DeviceMinor:216 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b051ae27-7879-448d-b426-4dce76e29739/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:304 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5aa67ace-d03a-4d06-9fb5-24777b65f2cc/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:314 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ebcff81d7a6c890b8f9349aed1a519a345baa59434656ca8aba0fb5ac7b28498/userdata/shm DeviceMajor:0 DeviceMinor:345 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-625 DeviceMajor:0 DeviceMinor:625 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a192c38a-4bfa-40fe-9a2d-d48260cf6443/volumes/kubernetes.io~projected/kube-api-access-fn7fm DeviceMajor:0 DeviceMinor:1122 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-124 DeviceMajor:0 DeviceMinor:124 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:157 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~projected/kube-api-access-rb6pb DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f2641a7c5c4699349154d341f479564ead3cd202754494a1163f896bbcf08b55/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-925 DeviceMajor:0 DeviceMinor:925 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4568b0000197ea509dbc549f285c717622711f0c697e5e0a5502e9e4faaedd8e/userdata/shm DeviceMajor:0 DeviceMinor:762 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3/userdata/shm DeviceMajor:0 DeviceMinor:813 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d25d34f61259a51a0bba1141bc81ca58437b24f94d8a1d86f6a0a4ba646442a3/userdata/shm DeviceMajor:0 DeviceMinor:149 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-404 DeviceMajor:0 DeviceMinor:404 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3eef3ef-f954-4e47-92b4-0155bc27332d/volumes/kubernetes.io~projected/kube-api-access-lfdn2 DeviceMajor:0 DeviceMinor:1021 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1efcc24c-87bf-48cd-83b5-196c661a2517/volumes/kubernetes.io~projected/kube-api-access-whkbl DeviceMajor:0 DeviceMinor:807 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-144 DeviceMajor:0 DeviceMinor:144 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/588f3f48138c2b3392a4eae817dfb25b4a6dd6a9f3ecf65d5033e45b842a15ed/userdata/shm DeviceMajor:0 DeviceMinor:561 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-417 DeviceMajor:0 DeviceMinor:417 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1197 DeviceMajor:0 DeviceMinor:1197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fb37bb90bb43ad3468c436c2a8fd1359b6b11fa1cf6e9efbe82545603bb55352/userdata/shm DeviceMajor:0 DeviceMinor:521 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1163 DeviceMajor:0 DeviceMinor:1163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1214 DeviceMajor:0 DeviceMinor:1214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1243 DeviceMajor:0 DeviceMinor:1243 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~projected/kube-api-access-jn5h6 DeviceMajor:0 DeviceMinor:909 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-146 DeviceMajor:0 DeviceMinor:146 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/956e8e5ddc763af6517c261e99db870a7367400fa001e86dc6d918a799e34361/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-363 DeviceMajor:0 DeviceMinor:363 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-371 DeviceMajor:0 DeviceMinor:371 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/volumes/kubernetes.io~projected/kube-api-access-wqkdr DeviceMajor:0 DeviceMinor:499 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-557 DeviceMajor:0 DeviceMinor:557 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/636d93d2bc5d6274a68744e7bb8286da893d7e599b6de981210f2789cc0fd2da/userdata/shm DeviceMajor:0 DeviceMinor:741 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~projected/kube-api-access-jpttk DeviceMajor:0 DeviceMinor:613 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:158 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:298 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-565 DeviceMajor:0 DeviceMinor:565 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-253 DeviceMajor:0 DeviceMinor:253 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-1037 DeviceMajor:0 DeviceMinor:1037 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1077 DeviceMajor:0 DeviceMinor:1077 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:640 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7663a25e-236d-4b1d-83ce-733ab146dee3/volumes/kubernetes.io~projected/kube-api-access-ltsnd DeviceMajor:0 DeviceMinor:1019 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dce6001c167c8409f989caf47e1b207dca24bcb6708c937a6f68d9e6924ddc5f/userdata/shm DeviceMajor:0 DeviceMinor:1026 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-903 DeviceMajor:0 DeviceMinor:903 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eefee934-ac6b-44e3-a6be-1ae62362ab4f/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:645 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d/volumes/kubernetes.io~projected/kube-api-access-pj4f8 DeviceMajor:0 DeviceMinor:809 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:153 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fa6ec978459ecd037eb5e7ebf83c34ee3bad1cfd3630624998e9088ad7624e44/userdata/shm DeviceMajor:0 DeviceMinor:330 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-647 DeviceMajor:0 DeviceMinor:647 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j DeviceMajor:0 DeviceMinor:1020 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-365 DeviceMajor:0 DeviceMinor:365 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:463 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-531 DeviceMajor:0 DeviceMinor:531 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-735 DeviceMajor:0 DeviceMinor:735 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ccfb5a253e70da9d941ee8b81dec77e2d40360e47145ee2a3717b4f36f0e409/userdata/shm DeviceMajor:0 DeviceMinor:914 Capacity:67108864 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:315 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-643 DeviceMajor:0 DeviceMinor:643 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:639 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82/userdata/shm DeviceMajor:0 DeviceMinor:890 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~projected/kube-api-access-czfkv DeviceMajor:0 DeviceMinor:312 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a83b648c669c68bd86ac13db4b39e42f8f2b76a3abef61ebc8f54734aad5803/userdata/shm DeviceMajor:0 DeviceMinor:327 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-787 DeviceMajor:0 DeviceMinor:787 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1147 DeviceMajor:0 DeviceMinor:1147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1233 DeviceMajor:0 DeviceMinor:1233 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3c741f860cc22e91172e5b117239280c554c86e375ed76735fad7037076b19db/userdata/shm DeviceMajor:0 DeviceMinor:432 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~projected/kube-api-access-m789m DeviceMajor:0 DeviceMinor:835 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a969ddd4-e20d-4dd2-84f4-a140bac65df0/volumes/kubernetes.io~projected/kube-api-access-cbzpz DeviceMajor:0 DeviceMinor:617 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:94 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-234 DeviceMajor:0 DeviceMinor:234 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-200 DeviceMajor:0 DeviceMinor:200 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06d774e5-314a-49df-bdca-8e780c9af25a/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:308 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-421 DeviceMajor:0 DeviceMinor:421 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e3ebbbd0ee7ca51929de3beceebd50f7b813cf02ddcf6e89d22ba8b987cb3d6e/userdata/shm DeviceMajor:0 
DeviceMinor:458 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:736 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78dfd31a88f925b32bf9c0b8856a8693ab7bf23f18e8289b9863420889031b28/userdata/shm DeviceMajor:0 DeviceMinor:900 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-193 DeviceMajor:0 DeviceMinor:193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a4482da3b6269c37fd64ed4a723b3d1c0f7f294b123b00a40d321fec5fbfbd20/userdata/shm DeviceMajor:0 DeviceMinor:631 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-452 DeviceMajor:0 DeviceMinor:452 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6f723d97-5c65-4ae7-9085-26db8b4f2f52/volumes/kubernetes.io~projected/kube-api-access-wwv7s DeviceMajor:0 DeviceMinor:514 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/90d4314b3ecfe26b003e884ba46c85d035a4eed1d9c53c3b4088cb96f2f898e2/userdata/shm DeviceMajor:0 DeviceMinor:549 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/faa79e15-1875-4865-b5e0-aecd4c447bad/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:588 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:611 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/803897bb-580e-4f7a-9be2-583fc607d1f6/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:291 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:344 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-481 DeviceMajor:0 DeviceMinor:481 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b340553b-d483-4839-8328-518f27770832/volumes/kubernetes.io~projected/kube-api-access-92p99 DeviceMajor:0 DeviceMinor:974 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cb7a3f7dec078f2bf4b828c8816fc0b75ec1ac5572e46174696bef2e60b03393/userdata/shm DeviceMajor:0 DeviceMinor:1025 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1145 DeviceMajor:0 DeviceMinor:1145 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/783c92b7dc341bf0cb5e3bc7e8cf6deaa49a260e5c3e691e18ff63d38a53176d/userdata/shm DeviceMajor:0 DeviceMinor:381 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5dcaccc5-46b1-4a38-b3af-6839dec529d3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1121 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1161 DeviceMajor:0 DeviceMinor:1161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1131 DeviceMajor:0 DeviceMinor:1131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:111 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-465 DeviceMajor:0 DeviceMinor:465 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69b752ed-691c-4574-a01e-428d4bf85b75/volumes/kubernetes.io~projected/kube-api-access-t8knq DeviceMajor:0 DeviceMinor:612 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b8f77acffa4c354928b006e5fc54b8bb8ec4679d888054e23f119227d23afda2/userdata/shm DeviceMajor:0 DeviceMinor:589 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f5231d5a4957175b3fcfcc4881d8e39cd60e6c7fb26105de567b4c9770b1dc9d/userdata/shm DeviceMajor:0 DeviceMinor:513 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-751 DeviceMajor:0 DeviceMinor:751 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1e6e9b5cc1123cc229e5b5c55833cf8c55b534df02d94f2822bf88d34528957/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-891 DeviceMajor:0 DeviceMinor:891 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf DeviceMajor:0 DeviceMinor:399 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-394 DeviceMajor:0 DeviceMinor:394 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c95705e3-17ef-40fe-89e8-22586a32621b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:646 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1058 DeviceMajor:0 DeviceMinor:1058 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 DeviceMajor:0 DeviceMinor:689 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1189 DeviceMajor:0 DeviceMinor:1189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-412 DeviceMajor:0 DeviceMinor:412 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99aab5d6addd41c622154cc6f270a6df7b17355eeaee15a1257331779d37b167/userdata/shm DeviceMajor:0 DeviceMinor:889 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:121 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/8215ec6a2b5e179f68ca320150c8b99f411ed9a1c51d17df14a842a1716977d1/userdata/shm DeviceMajor:0 DeviceMinor:160 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b9a6d5be513374f316e04eb157797b0a16d4a0fedf4d3652d733cb3bb24509c/userdata/shm DeviceMajor:0 DeviceMinor:320 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1195 DeviceMajor:0 DeviceMinor:1195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff1b2a1ff9154238692ebb6e0ae688f400ae8b743c546d838dde5d5bc888fe8a/userdata/shm DeviceMajor:0 DeviceMinor:896 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/adbcce01-7282-4a75-843a-9623060346f0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:293 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-777 DeviceMajor:0 DeviceMinor:777 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:916 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-529 DeviceMajor:0 DeviceMinor:529 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b6df7eda176c098a26a27c75d63f11b98e6873c57201a5b483ce6015050d379b/userdata/shm DeviceMajor:0 DeviceMinor:997 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f32ab39360216ffc76839347f6e44e67ea8c080cbbd0cf86ff8f7a3187e463e/userdata/shm DeviceMajor:0 DeviceMinor:1032 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/baf8480d9e2390e6727c0d4fc8ed3cdbe4111310f815a1aee6d6f586fad1452c/userdata/shm DeviceMajor:0 DeviceMinor:812 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1135 DeviceMajor:0 DeviceMinor:1135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9870a8ca9abbc19dede5bbca4e6dd4181d32effc6bff035c970be30f43874cc5/userdata/shm DeviceMajor:0 DeviceMinor:161 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0535e784-8e28-4090-aa2e-df937910767c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:294 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-692 
DeviceMajor:0 DeviceMinor:692 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:976 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1075 DeviceMajor:0 DeviceMinor:1075 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/918ff36b-662f-46ae-b71a-301df7e67735/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:295 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:587 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/24dfafc9-86a9-450e-ac62-a871138106c0/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:834 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1079 DeviceMajor:0 DeviceMinor:1079 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh DeviceMajor:0 DeviceMinor:711 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:806 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a5b3c1fb-6f81-4067-98da-681d6c7c33e4/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:1001 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/690d1f81-7b1f-4fd0-9b6e-154c9687c744/volumes/kubernetes.io~projected/kube-api-access-8wh8g DeviceMajor:0 DeviceMinor:1018 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:06a6bfff8d933d9 MacAddress:fa:ff:dd:28:6f:37 Speed:10000 Mtu:8900} {Name:0f818fbb8a023f8 MacAddress:de:fb:a6:52:2f:94 Speed:10000 Mtu:8900} {Name:12a33b618352d27 MacAddress:22:b2:f2:a3:0b:74 Speed:10000 Mtu:8900} {Name:317e7dd62fa100d MacAddress:3a:e7:77:b1:f4:ed Speed:10000 Mtu:8900} {Name:3c741f860cc22e9 MacAddress:82:f2:d9:30:f6:99 Speed:10000 Mtu:8900} {Name:44ddc337512cf47 MacAddress:d2:7a:f0:51:f9:6f Speed:10000 Mtu:8900} {Name:4568b0000197ea5 MacAddress:36:e1:be:99:a6:24 Speed:10000 Mtu:8900} {Name:4a83b648c669c68 MacAddress:2a:68:6b:12:bd:1d Speed:10000 Mtu:8900} {Name:4b9a6d5be513374 MacAddress:46:23:c0:1e:19:b6 Speed:10000 Mtu:8900} {Name:4dce4931560a1d1 MacAddress:46:7f:e5:1b:5d:6e Speed:10000 Mtu:8900} {Name:53b64d2d94429cb MacAddress:46:37:3d:ae:d7:ec Speed:10000 Mtu:8900} {Name:588f3f48138c2b3 MacAddress:9e:52:81:4b:3b:d9 Speed:10000 Mtu:8900} {Name:608e5faf1d1f7ff MacAddress:9e:8b:75:ab:26:b1 Speed:10000 Mtu:8900} {Name:6f32ab39360216f MacAddress:06:bf:8a:09:d6:03 Speed:10000 Mtu:8900} {Name:783c92b7dc341bf MacAddress:26:74:e4:65:e6:59 Speed:10000 Mtu:8900} {Name:78dfd31a88f925b MacAddress:3a:64:b6:ce:79:fc Speed:10000 Mtu:8900} {Name:79a4ce4fa1bb86b 
MacAddress:ae:a0:8c:1a:fd:bd Speed:10000 Mtu:8900} {Name:80e95cd74710420 MacAddress:06:be:e9:e5:e1:3b Speed:10000 Mtu:8900} {Name:8154ea1badaee93 MacAddress:c6:f8:8a:89:8e:72 Speed:10000 Mtu:8900} {Name:8bff50a8699bca9 MacAddress:b2:b8:2f:2d:77:34 Speed:10000 Mtu:8900} {Name:8e359aa49722552 MacAddress:86:9c:5d:00:24:b3 Speed:10000 Mtu:8900} {Name:8fcabcf0ace4fc4 MacAddress:72:48:fb:3f:14:60 Speed:10000 Mtu:8900} {Name:90d4314b3ecfe26 MacAddress:62:46:27:9b:97:c9 Speed:10000 Mtu:8900} {Name:9224545b3d2efd5 MacAddress:2a:37:7e:b7:2e:8a Speed:10000 Mtu:8900} {Name:938a08c4d1aea74 MacAddress:86:b3:e3:cf:40:73 Speed:10000 Mtu:8900} {Name:939b2c92bcdf7cd MacAddress:c2:07:35:81:6b:81 Speed:10000 Mtu:8900} {Name:956e8e5ddc763af MacAddress:16:e6:6a:a8:63:29 Speed:10000 Mtu:8900} {Name:99aab5d6addd41c MacAddress:92:03:cb:66:52:9b Speed:10000 Mtu:8900} {Name:9ccfb5a253e70da MacAddress:02:e1:78:47:35:0f Speed:10000 Mtu:8900} {Name:a4482da3b6269c3 MacAddress:fe:da:7e:45:e1:bf Speed:10000 Mtu:8900} {Name:b23778ca4e9ae4d MacAddress:4a:4a:4f:26:7d:0e Speed:10000 Mtu:8900} {Name:b27a5bf76da5bbf MacAddress:5e:c4:a9:05:dd:2f Speed:10000 Mtu:8900} {Name:b6df7eda176c098 MacAddress:de:2d:31:48:b0:03 Speed:10000 Mtu:8900} {Name:b8f77acffa4c354 MacAddress:32:d8:4e:e2:91:4b Speed:10000 Mtu:8900} {Name:baf8480d9e2390e MacAddress:6e:fd:51:ea:80:d3 Speed:10000 Mtu:8900} {Name:bf859b5a264e6e2 MacAddress:52:37:22:51:f8:a6 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:c0c28bf839b2e5e MacAddress:3a:b9:84:16:61:71 Speed:10000 Mtu:8900} {Name:ca1230f4b492fd1 MacAddress:ba:73:d5:9c:6f:f2 Speed:10000 Mtu:8900} {Name:cb7a3f7dec078f2 MacAddress:32:75:98:78:2a:43 Speed:10000 Mtu:8900} {Name:cf9eaca9ad61c4a MacAddress:0e:d9:30:d4:d4:3f Speed:10000 Mtu:8900} {Name:dce6001c167c840 MacAddress:9a:3c:73:02:56:c6 Speed:10000 Mtu:8900} {Name:dfbeecf9d9d8441 MacAddress:a6:82:cc:4b:d4:e2 Speed:10000 Mtu:8900} 
{Name:e3ebbbd0ee7ca51 MacAddress:5a:5b:80:08:f2:f1 Speed:10000 Mtu:8900} {Name:e90ae89b899f7c4 MacAddress:72:90:97:51:4f:2b Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:f1e6e9b5cc1123c MacAddress:1e:b9:3a:0a:0c:13 Speed:10000 Mtu:8900} {Name:f2641a7c5c46993 MacAddress:fe:1b:70:08:db:1e Speed:10000 Mtu:8900} {Name:f5231d5a4957175 MacAddress:62:b1:2e:8b:20:cf Speed:10000 Mtu:8900} {Name:fa6ec978459ecd0 MacAddress:86:67:b7:23:71:72 Speed:10000 Mtu:8900} {Name:fb37bb90bb43ad3 MacAddress:92:28:a1:88:42:40 Speed:10000 Mtu:8900} {Name:ff1b2a1ff915423 MacAddress:fe:8c:a6:5a:f8:54 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:9e:f4:18:ab:cf:b5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 
Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 13:58:29.724746 master-0 kubenswrapper[16176]: I1203 13:58:29.724273 16176 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 13:58:29.724746 master-0 kubenswrapper[16176]: I1203 13:58:29.724351 16176 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 13:58:29.724746 master-0 kubenswrapper[16176]: I1203 13:58:29.724685 16176 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 13:58:29.725076 master-0 kubenswrapper[16176]: I1203 13:58:29.724863 16176 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 13:58:29.725147 master-0 kubenswrapper[16176]: I1203 13:58:29.724897 16176 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Pe
rcentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 03 13:58:29.725196 master-0 kubenswrapper[16176]: I1203 13:58:29.725168 16176 topology_manager.go:138] "Creating topology manager with none policy"
Dec 03 13:58:29.725196 master-0 kubenswrapper[16176]: I1203 13:58:29.725182 16176 container_manager_linux.go:303] "Creating device plugin manager"
Dec 03 13:58:29.725196 master-0 kubenswrapper[16176]: I1203 13:58:29.725195 16176 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 13:58:29.725298 master-0 kubenswrapper[16176]: I1203 13:58:29.725227 16176 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 03 13:58:29.725298 master-0 kubenswrapper[16176]: I1203 13:58:29.725285 16176 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 13:58:29.725406 master-0 kubenswrapper[16176]: I1203 13:58:29.725378 16176 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 03 13:58:29.725521 master-0 kubenswrapper[16176]: I1203 13:58:29.725497 16176 kubelet.go:418] "Attempting to sync node with API server"
Dec 03 13:58:29.725521 master-0 kubenswrapper[16176]: I1203 13:58:29.725517 16176 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 03 13:58:29.725614 master-0 kubenswrapper[16176]: I1203 13:58:29.725537 16176 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 03 13:58:29.725614 master-0 kubenswrapper[16176]: I1203 13:58:29.725552 16176 kubelet.go:324] "Adding apiserver pod source"
Dec 03 13:58:29.725672 master-0 kubenswrapper[16176]: I1203 13:58:29.725643 16176 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 03 13:58:29.727335 master-0 kubenswrapper[16176]: W1203 13:58:29.727232 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:29.727458 master-0 kubenswrapper[16176]: E1203 13:58:29.727415 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:29.727689 master-0 kubenswrapper[16176]: I1203 13:58:29.727647 16176 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1"
Dec 03 13:58:29.727733 master-0 kubenswrapper[16176]: W1203 13:58:29.727695 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:29.727773 master-0 kubenswrapper[16176]: E1203 13:58:29.727751 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:29.727874 master-0 kubenswrapper[16176]: I1203 13:58:29.727848 16176 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 03 13:58:29.728201 master-0 kubenswrapper[16176]: I1203 13:58:29.728174 16176 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 03 13:58:29.728386 master-0 kubenswrapper[16176]: I1203 13:58:29.728362 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728389 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728399 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728408 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728415 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728423 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728430 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 03 13:58:29.728430 master-0 kubenswrapper[16176]: I1203 13:58:29.728438 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 03 13:58:29.728663 master-0 kubenswrapper[16176]: I1203 13:58:29.728451 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 03 13:58:29.728663 master-0 kubenswrapper[16176]: I1203 13:58:29.728459 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 03 13:58:29.728663 master-0 kubenswrapper[16176]: I1203 13:58:29.728501 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 03 13:58:29.728663 master-0 kubenswrapper[16176]: I1203 13:58:29.728517 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 03 13:58:29.728663 master-0 kubenswrapper[16176]: I1203 13:58:29.728616 16176 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 03 13:58:29.729127 master-0 kubenswrapper[16176]: I1203 13:58:29.729098 16176 server.go:1280] "Started kubelet"
Dec 03 13:58:29.730727 master-0 kubenswrapper[16176]: I1203 13:58:29.730585 16176 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:29.730789 master-0 kubenswrapper[16176]: I1203 13:58:29.730659 16176 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 03 13:58:29.730841 master-0 kubenswrapper[16176]: I1203 13:58:29.730685 16176 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 03 13:58:29.730899 master-0 kubenswrapper[16176]: I1203 13:58:29.730865 16176 server_v1.go:47] "podresources" method="list" useActivePods=true
Dec 03 13:58:29.731057 master-0 systemd[1]: Started Kubernetes Kubelet.
Dec 03 13:58:29.738147 master-0 kubenswrapper[16176]: I1203 13:58:29.738093 16176 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 03 13:58:29.738586 master-0 kubenswrapper[16176]: E1203 13:58:29.738297 16176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db93f1d8e6768 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:58:29.72905252 +0000 UTC m=+0.154693182,LastTimestamp:2025-12-03 13:58:29.72905252 +0000 UTC m=+0.154693182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:58:29.739797 master-0 kubenswrapper[16176]: I1203 13:58:29.739696 16176 server.go:449] "Adding debug handlers to kubelet server"
Dec 03 13:58:29.749547 master-0 kubenswrapper[16176]: E1203 13:58:29.749487 16176 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Dec 03 13:58:29.750338 master-0 kubenswrapper[16176]: I1203 13:58:29.750294 16176 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 13:58:29.750388 master-0 kubenswrapper[16176]: I1203 13:58:29.750357 16176 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 13:58:29.750669 master-0 kubenswrapper[16176]: I1203 13:58:29.750569 16176 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 09:04:16.536923492 +0000 UTC
Dec 03 13:58:29.750669 master-0 kubenswrapper[16176]: I1203 13:58:29.750658 16176 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h5m46.786268404s for next certificate rotation
Dec 03 13:58:29.750880 master-0 kubenswrapper[16176]: E1203 13:58:29.750821 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:29.750975 master-0 kubenswrapper[16176]: I1203 13:58:29.750955 16176 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 13:58:29.751032 master-0 kubenswrapper[16176]: I1203 13:58:29.751022 16176 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 13:58:29.751233 master-0 kubenswrapper[16176]: I1203 13:58:29.751219 16176 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Dec 03 13:58:29.751370 master-0 kubenswrapper[16176]: I1203 13:58:29.751350 16176 factory.go:55] Registering systemd factory
Dec 03 13:58:29.751423 master-0 kubenswrapper[16176]: I1203 13:58:29.751383 16176 factory.go:221] Registration of the systemd container factory successfully
Dec 03 13:58:29.751696 master-0 kubenswrapper[16176]: I1203 13:58:29.751670 16176 factory.go:153] Registering CRI-O factory
Dec 03 13:58:29.751744 master-0 kubenswrapper[16176]: I1203 13:58:29.751701 16176 factory.go:221] Registration of the crio container factory successfully
Dec 03 13:58:29.751744 master-0 kubenswrapper[16176]: E1203 13:58:29.751709 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Dec 03 13:58:29.751814 master-0 kubenswrapper[16176]: I1203 13:58:29.751794 16176 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 13:58:29.751851 master-0 kubenswrapper[16176]: I1203 13:58:29.751840 16176 factory.go:103] Registering Raw factory
Dec 03 13:58:29.751898 master-0 kubenswrapper[16176]: I1203 13:58:29.751873 16176 manager.go:1196] Started watching for new ooms in manager
Dec 03 13:58:29.752487 master-0 kubenswrapper[16176]: I1203 13:58:29.752469 16176 manager.go:319] Starting recovery of all containers
Dec 03 13:58:29.754658 master-0 kubenswrapper[16176]: W1203 13:58:29.754539 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:29.754733 master-0 kubenswrapper[16176]: E1203 13:58:29.754680 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:29.768080 master-0 kubenswrapper[16176]: I1203 13:58:29.767965 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 03 13:58:29.768080 master-0 kubenswrapper[16176]: I1203 13:58:29.768060 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config" seLinuxMountContext=""
Dec 03 13:58:29.768080 master-0 kubenswrapper[16176]: I1203 13:58:29.768083 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768080 master-0 kubenswrapper[16176]: I1203 13:58:29.768102 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768119 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="486d4964-18cc-4adc-b82d-b09627cadda4" volumeName="kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768137 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69f41c3e-713b-4532-8534-ceefb7f519bf" volumeName="kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768153 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768172 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768193 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768209 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768228 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768244 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768298 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768328 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768346 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768362 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f723d97-5c65-4ae7-9085-26db8b4f2f52" volumeName="kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768378 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768404 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768426 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768448 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768471 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp" seLinuxMountContext=""
Dec 03 13:58:29.768486 master-0 kubenswrapper[16176]: I1203 13:58:29.768494 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768516 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42c95e54-b4ba-4b19-a97c-abcec840ac5d" volumeName="kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768538 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768560 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768581 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" volumeName="kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768605 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768631 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768656 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768679 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768701 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768737 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768759 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768784 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768801 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768818 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85820c13-e5cf-4af1-bd1c-dd74ea151cac" volumeName="kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768835 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03494fce-881e-4eb6-bc3d-570f1d8e7c52" volumeName="kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768852 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768871 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768887 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768907 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca" seLinuxMountContext=""
Dec 03 13:58:29.768947 master-0 kubenswrapper[16176]: I1203 13:58:29.768939 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.768974 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.768996 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769016 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769032 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769051 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769068 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769087 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769107 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769125 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769141 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769166 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03494fce-881e-4eb6-bc3d-570f1d8e7c52" volumeName="kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769186 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769204 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769224 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769244 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769299 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1efcc24c-87bf-48cd-83b5-196c661a2517" volumeName="kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769318 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85820c13-e5cf-4af1-bd1c-dd74ea151cac" volumeName="kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769338 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769356 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769372 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769389 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769407 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769424 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769445 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769461 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="486d4964-18cc-4adc-b82d-b09627cadda4" volumeName="kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769480 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769496 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769512 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2" seLinuxMountContext=""
Dec 03 13:58:29.769510 master-0 kubenswrapper[16176]: I1203 13:58:29.769533 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecc68b17-9112-471d-89f9-15bf30dfa004" volumeName="kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config" seLinuxMountContext=""
Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769552 16176 reconstruct.go:130] "Volume is marked as uncertain and added into
the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769572 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769632 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dcaccc5-46b1-4a38-b3af-6839dec529d3" volumeName="kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769660 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769683 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769710 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769733 16176 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769754 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769776 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769830 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769858 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5f23b6d-8303-46d8-892e-8e2c01b567b5" volumeName="kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769891 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769917 16176 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769945 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.769975 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770006 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770036 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770068 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770100 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770130 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770159 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770187 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770218 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770250 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770343 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.770375 master-0 kubenswrapper[16176]: I1203 13:58:29.770378 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770418 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770448 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770492 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770522 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770610 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="85820c13-e5cf-4af1-bd1c-dd74ea151cac" volumeName="kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770647 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770677 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770731 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770766 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770796 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770828 16176 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770859 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770889 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770921 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.770953 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" volumeName="kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771003 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771036 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771066 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771097 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecc68b17-9112-471d-89f9-15bf30dfa004" volumeName="kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771124 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771150 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771178 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771205 16176 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 13:58:29.771207 master-0 kubenswrapper[16176]: I1203 13:58:29.771233 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5f23b6d-8303-46d8-892e-8e2c01b567b5" volumeName="kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771296 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5f23b6d-8303-46d8-892e-8e2c01b567b5" volumeName="kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771329 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="486d4964-18cc-4adc-b82d-b09627cadda4" volumeName="kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771356 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771381 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771411 16176 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ecc68b17-9112-471d-89f9-15bf30dfa004" volumeName="kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771436 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771463 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771490 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771520 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85820c13-e5cf-4af1-bd1c-dd74ea151cac" volumeName="kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771546 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69f41c3e-713b-4532-8534-ceefb7f519bf" volumeName="kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771574 16176 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771605 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771635 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771663 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771705 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771735 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1efcc24c-87bf-48cd-83b5-196c661a2517" volumeName="kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771770 16176 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771799 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771828 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771857 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771888 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771922 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771952 16176 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="03494fce-881e-4eb6-bc3d-570f1d8e7c52" volumeName="kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.771980 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.772009 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx" seLinuxMountContext="" Dec 03 13:58:29.772021 master-0 kubenswrapper[16176]: I1203 13:58:29.772040 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772072 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772103 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772132 16176 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1efcc24c-87bf-48cd-83b5-196c661a2517" volumeName="kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772161 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772193 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772227 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772296 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772334 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772383 16176 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772413 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772445 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772477 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772509 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772539 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772576 16176 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772604 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772631 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772657 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772692 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772719 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4" seLinuxMountContext="" Dec 03 13:58:29.772745 master-0 kubenswrapper[16176]: I1203 13:58:29.772749 16176 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50f28c77-b15c-4b86-93c8-221c0cc82bb2" volumeName="kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772777 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772805 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772835 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772861 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69f41c3e-713b-4532-8534-ceefb7f519bf" volumeName="kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772889 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772919 16176 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772946 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ecc68b17-9112-471d-89f9-15bf30dfa004" volumeName="kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.772974 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773003 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773045 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773075 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773104 16176 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773134 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773161 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773189 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext="" Dec 03 13:58:29.773359 master-0 kubenswrapper[16176]: I1203 13:58:29.773222 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773253 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773490 16176 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773586 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773633 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773662 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773692 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773719 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5f23b6d-8303-46d8-892e-8e2c01b567b5" volumeName="kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.773772 master-0 kubenswrapper[16176]: I1203 13:58:29.773748 16176 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773775 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773814 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69f41c3e-713b-4532-8534-ceefb7f519bf" volumeName="kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773873 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773901 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f5f23b6d-8303-46d8-892e-8e2c01b567b5" volumeName="kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773928 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773955 16176 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.773983 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext="" Dec 03 13:58:29.774012 master-0 kubenswrapper[16176]: I1203 13:58:29.774010 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext="" Dec 03 13:58:29.774224 master-0 kubenswrapper[16176]: I1203 13:58:29.774041 16176 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs" seLinuxMountContext="" Dec 03 13:58:29.774406 master-0 kubenswrapper[16176]: I1203 13:58:29.774253 16176 reconstruct.go:97] "Volume reconstruction finished" Dec 03 13:58:29.774446 master-0 kubenswrapper[16176]: I1203 13:58:29.774423 16176 reconciler.go:26] "Reconciler: start to sync state" Dec 03 13:58:29.789536 master-0 kubenswrapper[16176]: I1203 13:58:29.789417 16176 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 13:58:29.791724 master-0 kubenswrapper[16176]: I1203 13:58:29.791685 16176 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 13:58:29.791791 master-0 kubenswrapper[16176]: I1203 13:58:29.791764 16176 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 13:58:29.791862 master-0 kubenswrapper[16176]: I1203 13:58:29.791806 16176 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 13:58:29.791905 master-0 kubenswrapper[16176]: E1203 13:58:29.791872 16176 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 13:58:29.792949 master-0 kubenswrapper[16176]: W1203 13:58:29.792886 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:58:29.793024 master-0 kubenswrapper[16176]: E1203 13:58:29.792950 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 13:58:29.817058 master-0 kubenswrapper[16176]: I1203 13:58:29.816953 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/0.log" Dec 03 13:58:29.817738 master-0 kubenswrapper[16176]: I1203 13:58:29.817673 16176 generic.go:334] "Generic (PLEG): container finished" podID="da583723-b3ad-4a6f-b586-09b739bd7f8c" containerID="55c650b6735d1149a2afda93b8298292e086e4e3f1a7fa967236b4dd8824447e" exitCode=1 Dec 03 13:58:29.826339 master-0 kubenswrapper[16176]: I1203 13:58:29.826237 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerID="12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539" exitCode=0 Dec 03 13:58:29.829687 master-0 kubenswrapper[16176]: I1203 13:58:29.829631 16176 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="af97d05966a6b8b6492c95f3ae8bbb3e7b5394709c3c830a0152652cc4e1899b" exitCode=0 Dec 03 13:58:29.829687 master-0 kubenswrapper[16176]: I1203 13:58:29.829683 16176 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="2c07c96ce111810f4abda326bee63148f01fbb43604144637921c7eaf553e422" exitCode=0 Dec 03 13:58:29.834526 master-0 kubenswrapper[16176]: I1203 13:58:29.834405 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="a71ca428dfafcfdc36c094ec10a4f26a0955b62eee12c5643b197e7b67fda68a" exitCode=0 Dec 03 13:58:29.834526 master-0 kubenswrapper[16176]: I1203 13:58:29.834525 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="f9b2a45b3882aa4aab7d621861c3b576125dca392eda394a42bdbf272c5861e2" exitCode=0 Dec 03 13:58:29.834808 master-0 kubenswrapper[16176]: I1203 13:58:29.834544 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="26079c56109d1215373542cb7279aa79197f8d276b87f23f84c5d431dd38bc3f" exitCode=0 Dec 03 13:58:29.834808 master-0 kubenswrapper[16176]: I1203 13:58:29.834560 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="54eb7436f6ac8799b7f10cde49a492e33995d42df0890008db66fbf955cc9e20" exitCode=0 Dec 03 13:58:29.834808 master-0 kubenswrapper[16176]: I1203 13:58:29.834574 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" 
containerID="a6cef233e6c629ac6fba57da009a22816a29742255beeb15a48e7b7b48c9e536" exitCode=0 Dec 03 13:58:29.834808 master-0 kubenswrapper[16176]: I1203 13:58:29.834581 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="8508b9103a62149e40a9f8b253309fee2580cb816ac86bfe2d7376f7c71e976c" exitCode=0 Dec 03 13:58:29.837136 master-0 kubenswrapper[16176]: I1203 13:58:29.837057 16176 generic.go:334] "Generic (PLEG): container finished" podID="1c562495-1290-4792-b4b2-639faa594ae2" containerID="f767adcff9a0e233cd5a0d89a9f43dff3fc735aa20c23293aa5dcee5ce476e89" exitCode=0 Dec 03 13:58:29.841430 master-0 kubenswrapper[16176]: I1203 13:58:29.841305 16176 generic.go:334] "Generic (PLEG): container finished" podID="06d774e5-314a-49df-bdca-8e780c9af25a" containerID="27c1a40f3c3bc0e48435031dbfc32e5c0ade7b6afed6f0f6f463c37953bf90b2" exitCode=0 Dec 03 13:58:29.851913 master-0 kubenswrapper[16176]: E1203 13:58:29.851845 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 13:58:29.861029 master-0 kubenswrapper[16176]: I1203 13:58:29.860979 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="c92c50a11c2a662e5059d5ecc58bf830b95d8aca43091af67255e096313ccb46" exitCode=0 Dec 03 13:58:29.863372 master-0 kubenswrapper[16176]: I1203 13:58:29.863335 16176 generic.go:334] "Generic (PLEG): container finished" podID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerID="8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e" exitCode=0 Dec 03 13:58:29.865972 master-0 kubenswrapper[16176]: I1203 13:58:29.865945 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-kk4tm_c777c9de-1ace-46be-b5c2-c71d252f53f4/kube-multus/0.log" Dec 03 13:58:29.866078 master-0 kubenswrapper[16176]: I1203 13:58:29.865984 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="c777c9de-1ace-46be-b5c2-c71d252f53f4" containerID="eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e" exitCode=1 Dec 03 13:58:29.868570 master-0 kubenswrapper[16176]: I1203 13:58:29.868522 16176 generic.go:334] "Generic (PLEG): container finished" podID="9afa5e14-6832-4650-9401-97359c445e61" containerID="47a8ddfc7f7b71da4bd36254308448e4c5ee29fcc63f3b852aed944db5125062" exitCode=0 Dec 03 13:58:29.873296 master-0 kubenswrapper[16176]: I1203 13:58:29.873239 16176 generic.go:334] "Generic (PLEG): container finished" podID="c95705e3-17ef-40fe-89e8-22586a32621b" containerID="c5498229c064870000ea3daf72432927db1bd1e50fb18b1e394aaea41976762e" exitCode=0 Dec 03 13:58:29.875255 master-0 kubenswrapper[16176]: I1203 13:58:29.875194 16176 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="0a533903a7af56a0f95d40262b0fc66ff75c086f5871d9886f30269d0ad24011" exitCode=0 Dec 03 13:58:29.879931 master-0 kubenswrapper[16176]: I1203 13:58:29.879892 16176 generic.go:334] "Generic (PLEG): container finished" podID="918ff36b-662f-46ae-b71a-301df7e67735" containerID="260c925573f93c0439722d8810ce6c195e1dc2d279cb295c92ace13d1222474e" exitCode=0 Dec 03 13:58:29.892017 master-0 kubenswrapper[16176]: E1203 13:58:29.891952 16176 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 13:58:29.903225 master-0 kubenswrapper[16176]: I1203 13:58:29.903138 16176 generic.go:334] "Generic (PLEG): container finished" podID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" containerID="efe5b98b8193b6c315bd2fdafc1dfa799f114179992474177c6e7d697c70abb2" exitCode=0 Dec 03 13:58:29.920706 master-0 kubenswrapper[16176]: I1203 13:58:29.920658 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-78ddcf56f9-8l84w_63aae3b9-9a72-497e-af01-5d8b8d0ac876/multus-admission-controller/0.log" Dec 03 13:58:29.920823 master-0 
kubenswrapper[16176]: I1203 13:58:29.920717 16176 generic.go:334] "Generic (PLEG): container finished" podID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerID="4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc" exitCode=0 Dec 03 13:58:29.920823 master-0 kubenswrapper[16176]: I1203 13:58:29.920747 16176 generic.go:334] "Generic (PLEG): container finished" podID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerID="bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881" exitCode=137 Dec 03 13:58:29.922911 master-0 kubenswrapper[16176]: I1203 13:58:29.922866 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_ee6150c4-22d1-465b-a934-74d5e197d646/installer/0.log" Dec 03 13:58:29.922995 master-0 kubenswrapper[16176]: I1203 13:58:29.922922 16176 generic.go:334] "Generic (PLEG): container finished" podID="ee6150c4-22d1-465b-a934-74d5e197d646" containerID="9bec250a37c6fd420e6a68fa34a40e8bf74f0c10fd29a6d0f7605bcfd065e230" exitCode=1 Dec 03 13:58:29.924905 master-0 kubenswrapper[16176]: I1203 13:58:29.924850 16176 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c" exitCode=2 Dec 03 13:58:29.927297 master-0 kubenswrapper[16176]: I1203 13:58:29.927190 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_9477082b-005c-4ff5-812a-7c3230f60da2/installer/0.log" Dec 03 13:58:29.927297 master-0 kubenswrapper[16176]: I1203 13:58:29.927228 16176 generic.go:334] "Generic (PLEG): container finished" podID="9477082b-005c-4ff5-812a-7c3230f60da2" containerID="329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee" exitCode=1 Dec 03 13:58:29.935124 master-0 kubenswrapper[16176]: I1203 13:58:29.935058 16176 generic.go:334] "Generic (PLEG): container finished" podID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" 
containerID="97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c" exitCode=0 Dec 03 13:58:29.943171 master-0 kubenswrapper[16176]: I1203 13:58:29.943096 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5f574c6c79-86bh9_5aa67ace-d03a-4d06-9fb5-24777b65f2cc/kube-scheduler-operator-container/1.log" Dec 03 13:58:29.943171 master-0 kubenswrapper[16176]: I1203 13:58:29.943180 16176 generic.go:334] "Generic (PLEG): container finished" podID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" containerID="b788c260e42bc255de0b95312c50570d58e845a67dde6dd29f1d5ede6f50b760" exitCode=255 Dec 03 13:58:29.946429 master-0 kubenswrapper[16176]: I1203 13:58:29.946350 16176 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1" exitCode=0 Dec 03 13:58:29.946429 master-0 kubenswrapper[16176]: I1203 13:58:29.946398 16176 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3" exitCode=0 Dec 03 13:58:29.952172 master-0 kubenswrapper[16176]: E1203 13:58:29.951937 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 13:58:29.952172 master-0 kubenswrapper[16176]: I1203 13:58:29.952145 16176 generic.go:334] "Generic (PLEG): container finished" podID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerID="c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541" exitCode=0 Dec 03 13:58:29.959516 master-0 kubenswrapper[16176]: E1203 13:58:29.959418 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" 
interval="400ms" Dec 03 13:58:29.962337 master-0 kubenswrapper[16176]: I1203 13:58:29.962231 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log" Dec 03 13:58:29.963157 master-0 kubenswrapper[16176]: I1203 13:58:29.963065 16176 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b" exitCode=1 Dec 03 13:58:29.963157 master-0 kubenswrapper[16176]: I1203 13:58:29.963124 16176 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c" exitCode=0 Dec 03 13:58:29.968321 master-0 kubenswrapper[16176]: I1203 13:58:29.968239 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/0.log" Dec 03 13:58:29.968321 master-0 kubenswrapper[16176]: I1203 13:58:29.968316 16176 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="ecdb30fdbb4d4e7e6a5ab2a8c0c78dc966b6766d4fc8dacd3b90e5acf0728097" exitCode=1 Dec 03 13:58:29.978241 master-0 kubenswrapper[16176]: I1203 13:58:29.978176 16176 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="e05f3c8b427af65164aa63c5861d13e4cd4cc04110fb6fdb74286266751163bc" exitCode=0 Dec 03 13:58:29.985163 master-0 kubenswrapper[16176]: I1203 13:58:29.985115 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/0.log" Dec 03 13:58:29.985308 master-0 
kubenswrapper[16176]: I1203 13:58:29.985175 16176 generic.go:334] "Generic (PLEG): container finished" podID="adbcce01-7282-4a75-843a-9623060346f0" containerID="9d7457bb900844a16e5e3a7cfd4664192d8040e5785b96d2e474f9f0d185dccc" exitCode=1 Dec 03 13:58:29.990551 master-0 kubenswrapper[16176]: I1203 13:58:29.990497 16176 generic.go:334] "Generic (PLEG): container finished" podID="0535e784-8e28-4090-aa2e-df937910767c" containerID="70dfdf1d245b899ffd4f89819f8560cdba94451d4d92e6018d477dc269e6ea12" exitCode=0 Dec 03 13:58:29.996244 master-0 kubenswrapper[16176]: I1203 13:58:29.996171 16176 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c" exitCode=1 Dec 03 13:58:29.998804 master-0 kubenswrapper[16176]: I1203 13:58:29.998757 16176 generic.go:334] "Generic (PLEG): container finished" podID="52100521-67e9-40c9-887c-eda6560f06e0" containerID="62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae" exitCode=0 Dec 03 13:58:30.003290 master-0 kubenswrapper[16176]: I1203 13:58:30.003233 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-85dbd94574-8jfp5_bcc78129-4a81-410e-9a42-b12043b5a75a/ingress-operator/0.log" Dec 03 13:58:30.003422 master-0 kubenswrapper[16176]: I1203 13:58:30.003294 16176 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" containerID="69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91" exitCode=1 Dec 03 13:58:30.008497 master-0 kubenswrapper[16176]: I1203 13:58:30.008435 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92" exitCode=0 Dec 03 13:58:30.008497 master-0 kubenswrapper[16176]: I1203 13:58:30.008491 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="ebf07eb54db570834b7c9a90b6b07403" containerID="039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3" exitCode=0 Dec 03 13:58:30.008601 master-0 kubenswrapper[16176]: I1203 13:58:30.008503 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589" exitCode=0 Dec 03 13:58:30.013603 master-0 kubenswrapper[16176]: I1203 13:58:30.013550 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-57fd58bc7b-kktql_24dfafc9-86a9-450e-ac62-a871138106c0/oauth-apiserver/0.log" Dec 03 13:58:30.014201 master-0 kubenswrapper[16176]: I1203 13:58:30.014155 16176 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092" exitCode=1 Dec 03 13:58:30.014201 master-0 kubenswrapper[16176]: I1203 13:58:30.014196 16176 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="f40d880c3949fa39a1d71100e4b83bfcc1b96c7301f0caf0577fed05fde9d024" exitCode=0 Dec 03 13:58:30.027196 master-0 kubenswrapper[16176]: I1203 13:58:30.027072 16176 generic.go:334] "Generic (PLEG): container finished" podID="b051ae27-7879-448d-b426-4dce76e29739" containerID="4edfa8a89bc0d5038266241047b9c2dea2c14e6566f232726960cf6811e895c0" exitCode=0 Dec 03 13:58:30.031295 master-0 kubenswrapper[16176]: I1203 13:58:30.031231 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_725fa88d-f29d-4dee-bfba-6e1c4506f73c/installer/0.log" Dec 03 13:58:30.031404 master-0 kubenswrapper[16176]: I1203 13:58:30.031301 16176 generic.go:334] "Generic (PLEG): container finished" podID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerID="6f689de8a1f834cf175e7a94d531ac0bc5fbc598832080a29d70489ce59fa461" exitCode=1 Dec 03 13:58:30.039904 
master-0 kubenswrapper[16176]: I1203 13:58:30.039831 16176 generic.go:334] "Generic (PLEG): container finished" podID="e97e1725-cb55-4ce3-952d-a4fd0731577d" containerID="338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237" exitCode=0
Dec 03 13:58:30.052307 master-0 kubenswrapper[16176]: E1203 13:58:30.052237 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.093119 master-0 kubenswrapper[16176]: E1203 13:58:30.092963 16176 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 03 13:58:30.152465 master-0 kubenswrapper[16176]: E1203 13:58:30.152372 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.253045 master-0 kubenswrapper[16176]: E1203 13:58:30.252890 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.354197 master-0 kubenswrapper[16176]: E1203 13:58:30.354008 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.362303 master-0 kubenswrapper[16176]: E1203 13:58:30.362182 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Dec 03 13:58:30.455281 master-0 kubenswrapper[16176]: E1203 13:58:30.455178 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.494016 master-0 kubenswrapper[16176]: E1203 13:58:30.493924 16176 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 03 13:58:30.555440 master-0 kubenswrapper[16176]: E1203 13:58:30.555352 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.656559 master-0 kubenswrapper[16176]: E1203 13:58:30.656314 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.733296 master-0 kubenswrapper[16176]: I1203 13:58:30.732984 16176 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:30.759751 master-0 kubenswrapper[16176]: E1203 13:58:30.758404 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.825548 master-0 kubenswrapper[16176]: W1203 13:58:30.825437 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:30.825548 master-0 kubenswrapper[16176]: E1203 13:58:30.825548 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:30.858613 master-0 kubenswrapper[16176]: E1203 13:58:30.858550 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.908520 master-0 kubenswrapper[16176]: W1203 13:58:30.908238 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:30.908520 master-0 kubenswrapper[16176]: E1203 13:58:30.908437 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:30.948623 master-0 kubenswrapper[16176]: W1203 13:58:30.948471 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:30.948911 master-0 kubenswrapper[16176]: E1203 13:58:30.948629 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:30.958965 master-0 kubenswrapper[16176]: E1203 13:58:30.958912 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:30.963778 master-0 kubenswrapper[16176]: I1203 13:58:30.963714 16176 manager.go:324] Recovery completed
Dec 03 13:58:31.059124 master-0 kubenswrapper[16176]: E1203 13:58:31.059045 16176 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 13:58:31.060133 master-0 kubenswrapper[16176]: I1203 13:58:31.060082 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.063586 master-0 kubenswrapper[16176]: I1203 13:58:31.063551 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.063769 master-0 kubenswrapper[16176]: I1203 13:58:31.063754 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.063869 master-0 kubenswrapper[16176]: I1203 13:58:31.063854 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.068712 master-0 kubenswrapper[16176]: I1203 13:58:31.068650 16176 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 03 13:58:31.069041 master-0 kubenswrapper[16176]: I1203 13:58:31.069022 16176 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 03 13:58:31.069157 master-0 kubenswrapper[16176]: I1203 13:58:31.069144 16176 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 13:58:31.069557 master-0 kubenswrapper[16176]: I1203 13:58:31.069524 16176 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 03 13:58:31.069695 master-0 kubenswrapper[16176]: I1203 13:58:31.069656 16176 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 03 13:58:31.069778 master-0 kubenswrapper[16176]: I1203 13:58:31.069767 16176 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Dec 03 13:58:31.069839 master-0 kubenswrapper[16176]: I1203 13:58:31.069829 16176 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Dec 03 13:58:31.069896 master-0 kubenswrapper[16176]: I1203 13:58:31.069887 16176 policy_none.go:49] "None policy: Start"
Dec 03 13:58:31.075412 master-0 kubenswrapper[16176]: I1203 13:58:31.074347 16176 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 03 13:58:31.075412 master-0 kubenswrapper[16176]: I1203 13:58:31.074463 16176 state_mem.go:35] "Initializing new in-memory state store"
Dec 03 13:58:31.075412 master-0 kubenswrapper[16176]: I1203 13:58:31.074769 16176 state_mem.go:75] "Updated machine memory state"
Dec 03 13:58:31.075412 master-0 kubenswrapper[16176]: I1203 13:58:31.074781 16176 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Dec 03 13:58:31.152863 master-0 kubenswrapper[16176]: I1203 13:58:31.152796 16176 manager.go:334] "Starting Device Plugin manager"
Dec 03 13:58:31.153227 master-0 kubenswrapper[16176]: I1203 13:58:31.153021 16176 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 03 13:58:31.153227 master-0 kubenswrapper[16176]: I1203 13:58:31.153074 16176 server.go:79] "Starting device plugin registration server"
Dec 03 13:58:31.156176 master-0 kubenswrapper[16176]: I1203 13:58:31.153633 16176 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 03 13:58:31.156176 master-0 kubenswrapper[16176]: I1203 13:58:31.153654 16176 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 03 13:58:31.156176 master-0 kubenswrapper[16176]: I1203 13:58:31.154019 16176 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 03 13:58:31.156176 master-0 kubenswrapper[16176]: I1203 13:58:31.154235 16176 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 03 13:58:31.156176 master-0 kubenswrapper[16176]: I1203 13:58:31.154247 16176 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 03 13:58:31.161311 master-0 kubenswrapper[16176]: E1203 13:58:31.161207 16176 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Dec 03 13:58:31.163765 master-0 kubenswrapper[16176]: E1203 13:58:31.163430 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Dec 03 13:58:31.254444 master-0 kubenswrapper[16176]: I1203 13:58:31.254344 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.258898 master-0 kubenswrapper[16176]: I1203 13:58:31.258840 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.259029 master-0 kubenswrapper[16176]: I1203 13:58:31.258915 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.259029 master-0 kubenswrapper[16176]: I1203 13:58:31.258938 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.259029 master-0 kubenswrapper[16176]: I1203 13:58:31.258973 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:58:31.260534 master-0 kubenswrapper[16176]: E1203 13:58:31.260489 16176 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 13:58:31.281303 master-0 kubenswrapper[16176]: W1203 13:58:31.281151 16176 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 13:58:31.281303 master-0 kubenswrapper[16176]: E1203 13:58:31.281287 16176 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 13:58:31.294823 master-0 kubenswrapper[16176]: I1203 13:58:31.294686 16176 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Dec 03 13:58:31.295010 master-0 kubenswrapper[16176]: I1203 13:58:31.294882 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.297962 master-0 kubenswrapper[16176]: I1203 13:58:31.297892 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.298069 master-0 kubenswrapper[16176]: I1203 13:58:31.297974 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.298069 master-0 kubenswrapper[16176]: I1203 13:58:31.297993 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.298197 master-0 kubenswrapper[16176]: I1203 13:58:31.298169 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.298567 master-0 kubenswrapper[16176]: I1203 13:58:31.298497 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.302454 master-0 kubenswrapper[16176]: I1203 13:58:31.302415 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.302454 master-0 kubenswrapper[16176]: I1203 13:58:31.302454 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.302596 master-0 kubenswrapper[16176]: I1203 13:58:31.302466 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.302596 master-0 kubenswrapper[16176]: I1203 13:58:31.302419 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.302596 master-0 kubenswrapper[16176]: I1203 13:58:31.302504 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.302596 master-0 kubenswrapper[16176]: I1203 13:58:31.302518 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.302596 master-0 kubenswrapper[16176]: I1203 13:58:31.302574 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.302874 master-0 kubenswrapper[16176]: I1203 13:58:31.302835 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.305640 master-0 kubenswrapper[16176]: I1203 13:58:31.305600 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.305640 master-0 kubenswrapper[16176]: I1203 13:58:31.305633 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.305640 master-0 kubenswrapper[16176]: I1203 13:58:31.305644 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.305963 master-0 kubenswrapper[16176]: I1203 13:58:31.305832 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.305963 master-0 kubenswrapper[16176]: I1203 13:58:31.305935 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.305963 master-0 kubenswrapper[16176]: I1203 13:58:31.305948 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.306362 master-0 kubenswrapper[16176]: I1203 13:58:31.305976 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.306696 master-0 kubenswrapper[16176]: I1203 13:58:31.306559 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.308893 master-0 kubenswrapper[16176]: I1203 13:58:31.308835 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.308893 master-0 kubenswrapper[16176]: I1203 13:58:31.308897 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.309061 master-0 kubenswrapper[16176]: I1203 13:58:31.308913 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.310991 master-0 kubenswrapper[16176]: I1203 13:58:31.310937 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.310991 master-0 kubenswrapper[16176]: I1203 13:58:31.310993 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.311136 master-0 kubenswrapper[16176]: I1203 13:58:31.311007 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.311296 master-0 kubenswrapper[16176]: I1203 13:58:31.311259 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.311552 master-0 kubenswrapper[16176]: I1203 13:58:31.311500 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.316462 master-0 kubenswrapper[16176]: I1203 13:58:31.316074 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.316462 master-0 kubenswrapper[16176]: I1203 13:58:31.316205 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.316462 master-0 kubenswrapper[16176]: I1203 13:58:31.316226 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.316726 master-0 kubenswrapper[16176]: I1203 13:58:31.316651 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.317432 master-0 kubenswrapper[16176]: I1203 13:58:31.317316 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:31.317560 master-0 kubenswrapper[16176]: I1203 13:58:31.317441 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.320324 master-0 kubenswrapper[16176]: I1203 13:58:31.320235 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.320324 master-0 kubenswrapper[16176]: I1203 13:58:31.320310 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.320324 master-0 kubenswrapper[16176]: I1203 13:58:31.320320 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.321644 master-0 kubenswrapper[16176]: I1203 13:58:31.321410 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.321644 master-0 kubenswrapper[16176]: I1203 13:58:31.321452 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.321644 master-0 kubenswrapper[16176]: I1203 13:58:31.321464 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.321644 master-0 kubenswrapper[16176]: I1203 13:58:31.321614 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb4e2d334a547fbeaaea1fa9c53c41549464da1350be876ed579d7818ec2701"
Dec 03 13:58:31.321808 master-0 kubenswrapper[16176]: I1203 13:58:31.321776 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.321835 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.321920 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6721d25c3e1746543af1f9a5ba41f4231e36342be8a55d2319256cbd81592116"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.321942 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6792cad26eed07d3c74e4fc383ff88889a4e3b75ff7eade1202c14c219e4ab"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322001 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d923e2294dc5bd349ef1897a915245d9a43be1c9d681ac05585e4028bf44c392"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322011 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e5841f6f6d8362456d4cf786f11e54bc8b9d3300e0bfe95ffe518785f2d7ae"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322022 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322086 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322098 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322107 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"69fef65eed7a231fbc328ce757f033f41c2df5c982f607a1ed94eaeac79b4677"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322116 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02519d59f3bf03db2c1be4cd1e6b9323786e664243c04d172b397e2871ca74ad"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322128 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f4905a6fcf90ab360dfd35e8f3dd368d9bbfe1e7af447586cbad3d03e4dc305"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322154 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214cfb90bd01e2a38cc00a74c6415843347ddcbd2c5b20e0758acbc9d4f19c58"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322166 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd671840d59b133f88fb03765cdb68615a01b375fa5cbcc45c53662d0aad8d5"
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322172 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322182 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"953d4fa370a237b9436aa5943e3ed1d6266452ea81ddd19342d326f67d86137b"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322191 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"f11f456465909ff00f1d06f575bfec968f3ce6fd228257ccb54e28331ef9f75c"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322202 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"4f513e922063b39de8633935c977aade894111215b6c0312a180ddacc009565d"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322245 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322285 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322302 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322335 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322349 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322360 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322371 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322383 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322393 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322439 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322452 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322462 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"2d9b8c691d5f3ee7b94a063c9932a9e9584dbd2cc766bb12c9c9139903e78355"}
Dec 03 13:58:31.322525 master-0 kubenswrapper[16176]: I1203 13:58:31.322493 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d042c55327fe42c18032feedcbcb89d5a0275f42d648331d12b63fb7f5eab7f6"
Dec 03 13:58:31.323945 master-0 kubenswrapper[16176]: I1203 13:58:31.323913 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.324001 master-0 kubenswrapper[16176]: I1203 13:58:31.323970 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.324001 master-0 kubenswrapper[16176]: I1203 13:58:31.323982 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.324640 master-0 kubenswrapper[16176]: I1203 13:58:31.324613 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:31.324640 master-0 kubenswrapper[16176]: I1203 13:58:31.324633 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:31.324640 master-0 kubenswrapper[16176]: I1203 13:58:31.324642 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:31.410932 master-0 kubenswrapper[16176]: I1203 13:58:31.410775 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.410932 master-0 kubenswrapper[16176]: I1203 13:58:31.410889 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.410932 master-0 kubenswrapper[16176]: I1203 13:58:31.410926 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411009 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411032 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411052 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411069 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411086 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411102 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411123 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411138 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411152 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411177 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411277 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411349 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411371 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411387 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411406 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411422 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411465 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411493 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411540 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:31.412524 master-0 kubenswrapper[16176]: I1203 13:58:31.411565 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Dec 03 13:58:31.461044 master-0 kubenswrapper[16176]: I1203 13:58:31.460951 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:31.463907 master-0 kubenswrapper[16176]: I1203 13:58:31.463851 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:58:31.464061 master-0 kubenswrapper[16176]: I1203 13:58:31.463920 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:58:31.464061 master-0 kubenswrapper[16176]: I1203 13:58:31.463933 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:58:31.464061 master-0 kubenswrapper[16176]: I1203 13:58:31.463961 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:58:31.464915 master-0 kubenswrapper[16176]: E1203 13:58:31.464864 16176 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:58:31.513319 master-0 kubenswrapper[16176]: I1203 13:58:31.513169 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513319 master-0 kubenswrapper[16176]: I1203 13:58:31.513249 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513319 master-0 kubenswrapper[16176]: I1203 13:58:31.513319 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513348 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513367 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513387 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513413 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513430 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513457 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513461 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513504 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513538 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513570 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513590 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513587 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513475 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513582 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513581 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513638 16176 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513652 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513678 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513724 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513792 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513822 16176 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513834 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513857 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513885 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513893 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513912 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513885 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513933 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513905 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513965 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.513984 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.513990 master-0 kubenswrapper[16176]: I1203 13:58:31.514005 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514024 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514094 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514115 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514135 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514429 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514464 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514465 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514493 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514489 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514505 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 13:58:31.515201 master-0 kubenswrapper[16176]: I1203 13:58:31.514559 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.625754 master-0 kubenswrapper[16176]: I1203 13:58:31.625684 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:31.626018 master-0 kubenswrapper[16176]: I1203 13:58:31.625726 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 13:58:31.671492 master-0 kubenswrapper[16176]: W1203 13:58:31.670101 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cfbc1ee6cdd01fccdd5a1a088f4d538.slice/crio-341be848ebf4af74008efdd449b5764a91ae1122e65022f43c8e7c51b831904e WatchSource:0}: Error finding container 341be848ebf4af74008efdd449b5764a91ae1122e65022f43c8e7c51b831904e: Status 404 returned error can't find the container with id 341be848ebf4af74008efdd449b5764a91ae1122e65022f43c8e7c51b831904e Dec 03 13:58:31.744647 master-0 kubenswrapper[16176]: I1203 13:58:31.744580 16176 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 13:58:31.865593 master-0 kubenswrapper[16176]: I1203 13:58:31.865509 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:31.869015 master-0 kubenswrapper[16176]: I1203 13:58:31.868972 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:58:31.869067 master-0 kubenswrapper[16176]: I1203 13:58:31.869028 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:58:31.869067 master-0 kubenswrapper[16176]: I1203 13:58:31.869039 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:58:31.869146 master-0 kubenswrapper[16176]: I1203 13:58:31.869070 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 13:58:31.870061 master-0 kubenswrapper[16176]: E1203 13:58:31.870018 16176 kubelet_node_status.go:99] "Unable 
to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 13:58:32.058791 master-0 kubenswrapper[16176]: I1203 13:58:32.058125 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33" exitCode=0 Dec 03 13:58:32.058791 master-0 kubenswrapper[16176]: I1203 13:58:32.058203 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerDied","Data":"a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33"} Dec 03 13:58:32.058791 master-0 kubenswrapper[16176]: I1203 13:58:32.058303 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"fb0eb4fbc75a8799307fbdaf7c81730c9622bc288fc5002438ecdde1b3e30eab"} Dec 03 13:58:32.058791 master-0 kubenswrapper[16176]: I1203 13:58:32.058477 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.060776 master-0 kubenswrapper[16176]: I1203 13:58:32.060691 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"6cfbc1ee6cdd01fccdd5a1a088f4d538","Type":"ContainerStarted","Data":"341be848ebf4af74008efdd449b5764a91ae1122e65022f43c8e7c51b831904e"} Dec 03 13:58:32.061782 master-0 kubenswrapper[16176]: I1203 13:58:32.061390 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:58:32.061782 master-0 kubenswrapper[16176]: I1203 13:58:32.061422 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Dec 03 13:58:32.061782 master-0 kubenswrapper[16176]: I1203 13:58:32.061471 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:58:32.063672 master-0 kubenswrapper[16176]: I1203 13:58:32.063522 16176 generic.go:334] "Generic (PLEG): container finished" podID="13238af3704fe583f617f61e755cf4c2" containerID="d559032002ae450f2dcc5a6551686ae528fbdc12019934f45dbbd1835ac0a064" exitCode=0 Dec 03 13:58:32.063781 master-0 kubenswrapper[16176]: I1203 13:58:32.063716 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.063781 master-0 kubenswrapper[16176]: I1203 13:58:32.063767 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.063873 master-0 kubenswrapper[16176]: I1203 13:58:32.063829 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.063873 master-0 kubenswrapper[16176]: I1203 13:58:32.063727 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.064436 master-0 kubenswrapper[16176]: I1203 13:58:32.064398 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 13:58:32.072646 master-0 kubenswrapper[16176]: I1203 13:58:32.072514 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 13:58:32.072646 master-0 kubenswrapper[16176]: I1203 13:58:32.072653 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.072690 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 13:58:32.073654 master-0 
kubenswrapper[16176]: I1203 13:58:32.073220 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.073327 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.073335 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.073379 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.073437 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:32.073654 master-0 kubenswrapper[16176]: I1203 13:58:32.073556 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:32.076512 master-0 kubenswrapper[16176]: I1203 13:58:32.076474 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:32.076561 master-0 kubenswrapper[16176]: I1203 13:58:32.076491 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:32.076561 master-0 kubenswrapper[16176]: I1203 13:58:32.076556 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:32.076840 master-0 kubenswrapper[16176]: I1203 13:58:32.076570 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:32.076840 master-0 kubenswrapper[16176]: I1203 13:58:32.076524 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:32.076840 master-0 kubenswrapper[16176]: I1203 13:58:32.076673 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:32.521675 master-0 kubenswrapper[16176]: E1203 13:58:32.521504 16176 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db93f1d8e6768 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 13:58:29.72905252 +0000 UTC m=+0.154693182,LastTimestamp:2025-12-03 13:58:29.72905252 +0000 UTC m=+0.154693182,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 13:58:32.670599 master-0 kubenswrapper[16176]: I1203 13:58:32.670525 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:32.673958 master-0 kubenswrapper[16176]: I1203 13:58:32.673889 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:32.674422 master-0 kubenswrapper[16176]: I1203 13:58:32.673990 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:32.674422 master-0 kubenswrapper[16176]: I1203 13:58:32.674021 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:32.674422 master-0 kubenswrapper[16176]: I1203 13:58:32.674077 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:58:32.676105 master-0 kubenswrapper[16176]: E1203 13:58:32.676041 16176 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 13:58:33.077168 master-0 kubenswrapper[16176]: I1203 13:58:33.077087 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"6cfbc1ee6cdd01fccdd5a1a088f4d538","Type":"ContainerStarted","Data":"2927a79f39ed7802aaaf3f621d8e971809af85925fbb920aac36cdee358d7dd1"}
Dec 03 13:58:33.077500 master-0 kubenswrapper[16176]: I1203 13:58:33.077206 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:33.079207 master-0 kubenswrapper[16176]: I1203 13:58:33.079169 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da"}
Dec 03 13:58:33.080823 master-0 kubenswrapper[16176]: I1203 13:58:33.080774 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:33.080906 master-0 kubenswrapper[16176]: I1203 13:58:33.080835 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:33.080906 master-0 kubenswrapper[16176]: I1203 13:58:33.080874 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:33.209485 master-0 kubenswrapper[16176]: I1203 13:58:33.209228 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:33.209761 master-0 kubenswrapper[16176]: I1203 13:58:33.209596 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:33.214971 master-0 kubenswrapper[16176]: I1203 13:58:33.214850 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:33.214971 master-0 kubenswrapper[16176]: I1203 13:58:33.214899 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:33.214971 master-0 kubenswrapper[16176]: I1203 13:58:33.214913 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:33.217876 master-0 kubenswrapper[16176]: I1203 13:58:33.217845 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:34.012813 master-0 kubenswrapper[16176]: I1203 13:58:34.011775 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:34.012813 master-0 kubenswrapper[16176]: I1203 13:58:34.012108 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:34.017772 master-0 kubenswrapper[16176]: I1203 13:58:34.017337 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:34.017772 master-0 kubenswrapper[16176]: I1203 13:58:34.017390 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:34.017772 master-0 kubenswrapper[16176]: I1203 13:58:34.017402 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:34.092208 master-0 kubenswrapper[16176]: I1203 13:58:34.092089 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:34.094637 master-0 kubenswrapper[16176]: I1203 13:58:34.094572 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc"}
Dec 03 13:58:34.094724 master-0 kubenswrapper[16176]: I1203 13:58:34.094684 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:34.104304 master-0 kubenswrapper[16176]: I1203 13:58:34.096095 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:34.104304 master-0 kubenswrapper[16176]: I1203 13:58:34.096144 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:34.104304 master-0 kubenswrapper[16176]: I1203 13:58:34.096155 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:34.105597 master-0 kubenswrapper[16176]: I1203 13:58:34.105551 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:34.105675 master-0 kubenswrapper[16176]: I1203 13:58:34.105612 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:34.105675 master-0 kubenswrapper[16176]: I1203 13:58:34.105631 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:34.276910 master-0 kubenswrapper[16176]: I1203 13:58:34.276733 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:34.279674 master-0 kubenswrapper[16176]: I1203 13:58:34.279623 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:34.279674 master-0 kubenswrapper[16176]: I1203 13:58:34.279671 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:34.279674 master-0 kubenswrapper[16176]: I1203 13:58:34.279681 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:34.279981 master-0 kubenswrapper[16176]: I1203 13:58:34.279707 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:58:34.352942 master-0 kubenswrapper[16176]: I1203 13:58:34.345029 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:34.352942 master-0 kubenswrapper[16176]: I1203 13:58:34.351034 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:34.959211 master-0 kubenswrapper[16176]: I1203 13:58:34.959106 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:34.966248 master-0 kubenswrapper[16176]: I1203 13:58:34.966190 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:35.038663 master-0 kubenswrapper[16176]: I1203 13:58:35.038560 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:35.043133 master-0 kubenswrapper[16176]: I1203 13:58:35.043073 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:58:35.103837 master-0 kubenswrapper[16176]: I1203 13:58:35.103766 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:35.106463 master-0 kubenswrapper[16176]: I1203 13:58:35.106385 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7"}
Dec 03 13:58:35.108158 master-0 kubenswrapper[16176]: I1203 13:58:35.108103 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:35.108158 master-0 kubenswrapper[16176]: I1203 13:58:35.108155 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:35.108314 master-0 kubenswrapper[16176]: I1203 13:58:35.108169 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:36.499189 master-0 kubenswrapper[16176]: I1203 13:58:36.497545 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:36.503129 master-0 kubenswrapper[16176]: I1203 13:58:36.503055 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:36.503129 master-0 kubenswrapper[16176]: I1203 13:58:36.503115 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:36.503129 master-0 kubenswrapper[16176]: I1203 13:58:36.503128 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:37.509014 master-0 kubenswrapper[16176]: I1203 13:58:37.508958 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"27b4277c622910e257b98766d94f3182ae3aea1f090b364a9ed8b9175b63d219"}
Dec 03 13:58:37.509787 master-0 kubenswrapper[16176]: I1203 13:58:37.509766 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"afce319e99c6717d54fcac45d05cfe13edf74be9e988bfb6ced34d2e5a05b5e8"}
Dec 03 13:58:37.509903 master-0 kubenswrapper[16176]: I1203 13:58:37.509149 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:37.512570 master-0 kubenswrapper[16176]: I1203 13:58:37.512512 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:37.512668 master-0 kubenswrapper[16176]: I1203 13:58:37.512599 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:37.512668 master-0 kubenswrapper[16176]: I1203 13:58:37.512616 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:38.161915 master-0 kubenswrapper[16176]: I1203 13:58:38.161866 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:38.167129 master-0 kubenswrapper[16176]: I1203 13:58:38.167089 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:38.523133 master-0 kubenswrapper[16176]: I1203 13:58:38.523018 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/0.log"
Dec 03 13:58:38.525851 master-0 kubenswrapper[16176]: I1203 13:58:38.525823 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="27b4277c622910e257b98766d94f3182ae3aea1f090b364a9ed8b9175b63d219" exitCode=255
Dec 03 13:58:38.526500 master-0 kubenswrapper[16176]: I1203 13:58:38.526045 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerDied","Data":"27b4277c622910e257b98766d94f3182ae3aea1f090b364a9ed8b9175b63d219"}
Dec 03 13:58:38.526650 master-0 kubenswrapper[16176]: I1203 13:58:38.526627 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:38.526744 master-0 kubenswrapper[16176]: I1203 13:58:38.526097 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:38.529326 master-0 kubenswrapper[16176]: I1203 13:58:38.529225 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:38.529412 master-0 kubenswrapper[16176]: I1203 13:58:38.529355 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:38.529412 master-0 kubenswrapper[16176]: I1203 13:58:38.529382 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:38.530503 master-0 kubenswrapper[16176]: I1203 13:58:38.530460 16176 scope.go:117] "RemoveContainer" containerID="27b4277c622910e257b98766d94f3182ae3aea1f090b364a9ed8b9175b63d219"
Dec 03 13:58:38.533097 master-0 kubenswrapper[16176]: I1203 13:58:38.533077 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:39.536017 master-0 kubenswrapper[16176]: I1203 13:58:39.535815 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/0.log"
Dec 03 13:58:39.538604 master-0 kubenswrapper[16176]: I1203 13:58:39.538570 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e"}
Dec 03 13:58:39.538755 master-0 kubenswrapper[16176]: I1203 13:58:39.538730 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:39.540952 master-0 kubenswrapper[16176]: I1203 13:58:39.540908 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 13:58:39.542607 master-0 kubenswrapper[16176]: I1203 13:58:39.542574 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:39.542607 master-0 kubenswrapper[16176]: I1203 13:58:39.542608 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:39.542698 master-0 kubenswrapper[16176]: I1203 13:58:39.542619 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: I1203 13:58:39.556673 16176 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: I1203 13:58:39.556840 16176 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: I1203 13:58:39.556689 16176 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: I1203 13:58:39.557099 16176 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: I1203 13:58:39.557199 16176 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 13:58:39.557425 master-0 kubenswrapper[16176]: E1203 13:58:39.557371 16176 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Dec 03 13:58:39.869743 master-0 kubenswrapper[16176]: I1203 13:58:39.869652 16176 apiserver.go:52] "Watching apiserver"
Dec 03 13:58:39.890793 master-0 kubenswrapper[16176]: I1203 13:58:39.890712 16176 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 13:58:39.894686 master-0 kubenswrapper[16176]: I1203 13:58:39.894553 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-dns/node-resolver-4xlhs","openshift-network-operator/iptables-alerter-n24qb","openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw","kube-system/bootstrap-kube-controller-manager-master-0","openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-ovn-kubernetes/ovnkube-node-txl6b","openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8","openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-machine-api/machine-api-operator-7486ff55f-wcnxg","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-cluster-node-tuning-operator/tuned-7zkbg","openshift-etcd/etcd-master-0","openshift-multus/multus-admission-controller-78ddcf56f9-8l84w","openshift-service-ca/service-ca-6b8bb995f7-b68p8","openshift-catalogd/catalogd-controller-manager-754cfd84-qf898","openshift-network-diagnostics/network-check-target-pcchm","openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4","openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg","openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7","openshift-insights/insights-operator-59d99f9b7b-74sss","openshift-kube-apiserver/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","openshift-multus/multus-additional-cni-plugins-42hmk","openshift-multus/multus-kk4tm","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-kube-controller-manager/installer-1-master-0","assisted-installer/assisted-installer-controller-stq5g","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29","openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx","openshift-machine-config-operator/machine-config-daemon-2ztl9","openshift-kube-apiserver/installer-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-kube-scheduler/installer-4-master-0","openshift-marketplace/community-operators-582c5","openshift-marketplace/community-operators-7fwtv","openshift-multus/network-metrics-daemon-ch7xd","openshift-etcd/installer-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-kube-scheduler/installer-5-master-0","openshift-marketplace/certified-operators-t8rt7","openshift-marketplace/redhat-marketplace-mtm6s","openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l","openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h","openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w","openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/redhat-operators-6rjqz","openshift-network-node-identity/network-node-identity-c8csx","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-dns/dns-default-5m4f8","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql","openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6","openshift-apiserver/apiserver-6985f84b49-v9vlg","openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd","openshift-controller-manager/controller-manager-7d8fb964c9-v2h98","openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg","openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"]
Dec 03 13:58:39.895120 master-0 kubenswrapper[16176]: I1203 13:58:39.895047 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g"
Dec 03 13:58:39.897988 master-0 kubenswrapper[16176]: I1203 13:58:39.897903 16176 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="50f3000b-6567-4af2-8ea9-ca37d40ead7a"
Dec 03 13:58:39.906804 master-0 kubenswrapper[16176]: I1203 13:58:39.906463 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.906804 master-0 kubenswrapper[16176]: I1203 13:58:39.906619 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.906804 master-0 kubenswrapper[16176]: I1203 13:58:39.906765 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:58:39.907201 master-0 kubenswrapper[16176]: I1203 13:58:39.906858 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.907201 master-0 kubenswrapper[16176]: I1203 13:58:39.906978 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.907201 master-0 kubenswrapper[16176]: I1203 13:58:39.907126 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 13:58:39.907201 master-0 kubenswrapper[16176]: I1203 13:58:39.907193 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 13:58:39.907457 master-0 kubenswrapper[16176]: I1203 13:58:39.907325 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 13:58:39.907457 master-0 kubenswrapper[16176]: I1203 13:58:39.907373 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Dec 03 13:58:39.907457 master-0 kubenswrapper[16176]: I1203 13:58:39.907423 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Dec 03 13:58:39.907592 master-0 kubenswrapper[16176]: I1203 13:58:39.907509 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.907592 master-0 kubenswrapper[16176]: I1203 13:58:39.907519 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 13:58:39.907683 master-0 kubenswrapper[16176]: I1203 13:58:39.907613 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 03 13:58:39.907683 master-0 kubenswrapper[16176]: I1203 13:58:39.907634 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 13:58:39.907772 master-0 kubenswrapper[16176]: I1203 13:58:39.907710 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.907772 master-0 kubenswrapper[16176]: I1203 13:58:39.907727 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Dec 03 13:58:39.907772 master-0 kubenswrapper[16176]: I1203 13:58:39.907772 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 03 13:58:39.907913 master-0 kubenswrapper[16176]: I1203 13:58:39.907857 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.907913 master-0 kubenswrapper[16176]: I1203 13:58:39.907868 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 03 13:58:39.907913 master-0 kubenswrapper[16176]: I1203 13:58:39.907883 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 13:58:39.908035 master-0 kubenswrapper[16176]: I1203 13:58:39.907978 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 03 13:58:39.908035 master-0 kubenswrapper[16176]: I1203 13:58:39.908008 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 13:58:39.908111 master-0 kubenswrapper[16176]: I1203 13:58:39.908080 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.908296 master-0 kubenswrapper[16176]: I1203 13:58:39.908206 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.908417 master-0 kubenswrapper[16176]: I1203 13:58:39.908379 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.909418 master-0 kubenswrapper[16176]: I1203 13:58:39.908496 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.909418 master-0 kubenswrapper[16176]: I1203 13:58:39.908620 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.911459 master-0 kubenswrapper[16176]: I1203 13:58:39.911394 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 03 13:58:39.912690 master-0 kubenswrapper[16176]: I1203 13:58:39.912633 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 13:58:39.913048 master-0 kubenswrapper[16176]: I1203 13:58:39.907715 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 03 13:58:39.918159 master-0 kubenswrapper[16176]: I1203 13:58:39.918109 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 03 13:58:39.918354 master-0 kubenswrapper[16176]: I1203 13:58:39.918124 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 13:58:39.919624 master-0 kubenswrapper[16176]: I1203 13:58:39.919569 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 13:58:39.919825 master-0 kubenswrapper[16176]: I1203 13:58:39.919728 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 13:58:39.920112 master-0 kubenswrapper[16176]: I1203 13:58:39.920050 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 13:58:39.920188 master-0 kubenswrapper[16176]: I1203 13:58:39.920155 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Dec 03 13:58:39.920236 master-0 kubenswrapper[16176]: I1203 13:58:39.920209 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Dec 03 13:58:39.920318 master-0 kubenswrapper[16176]: I1203 13:58:39.920290 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 13:58:39.920429 master-0 kubenswrapper[16176]: I1203 13:58:39.920401 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 13:58:39.920508 master-0 kubenswrapper[16176]: I1203 13:58:39.920471 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 03 13:58:39.920723 master-0 kubenswrapper[16176]: I1203 13:58:39.920674 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 13:58:39.920723 master-0 kubenswrapper[16176]: I1203 13:58:39.920710 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 13:58:39.920819 master-0 kubenswrapper[16176]: I1203 13:58:39.920748 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.920861 master-0 kubenswrapper[16176]: I1203 13:58:39.920850 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.920960 master-0 kubenswrapper[16176]: I1203 13:58:39.920908 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.921114 master-0 kubenswrapper[16176]: I1203 13:58:39.920955 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 03 13:58:39.921297 master-0 kubenswrapper[16176]: I1203 13:58:39.921249 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.921488 master-0 kubenswrapper[16176]: I1203 13:58:39.921468 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 13:58:39.921949 master-0 kubenswrapper[16176]: I1203 13:58:39.921853 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 13:58:39.921949 master-0 kubenswrapper[16176]: I1203 13:58:39.921862 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.922396 master-0 kubenswrapper[16176]: I1203 13:58:39.921980 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Dec 03 13:58:39.924527 master-0 kubenswrapper[16176]: I1203 13:58:39.924469 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 03 13:58:39.924654 master-0 kubenswrapper[16176]: I1203 13:58:39.924617 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 13:58:39.924697 master-0 kubenswrapper[16176]: I1203 13:58:39.924664 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 13:58:39.924697 master-0 kubenswrapper[16176]: I1203 13:58:39.924690 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 13:58:39.924830 master-0 kubenswrapper[16176]: I1203 13:58:39.924791 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 03 13:58:39.924830 master-0 kubenswrapper[16176]: I1203 13:58:39.924820 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 03 13:58:39.924973 master-0 kubenswrapper[16176]: I1203 13:58:39.924935 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 13:58:39.927177 master-0 kubenswrapper[16176]: I1203 13:58:39.926877 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Dec 03 13:58:39.927177 master-0 kubenswrapper[16176]: I1203 13:58:39.926970 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Dec 03 13:58:39.927896 master-0 kubenswrapper[16176]: I1203 13:58:39.927844 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.927981 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.928524 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.928623 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.928899 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.929288 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.929715 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.930080 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.930425 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203
13:58:39.930440 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.931045 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.932247 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.932461 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Dec 03 13:58:39.932573 master-0 kubenswrapper[16176]: I1203 13:58:39.932499 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.932632 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.932710 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.932753 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.933124 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-78ddcf56f9-8l84w_63aae3b9-9a72-497e-af01-5d8b8d0ac876/multus-admission-controller/0.log" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.933282 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.934590 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.935113 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.935539 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.935574 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 13:58:39.937665 master-0 kubenswrapper[16176]: I1203 13:58:39.937034 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 03 13:58:39.937941 master-0 kubenswrapper[16176]: I1203 13:58:39.937812 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 13:58:39.938021 master-0 kubenswrapper[16176]: I1203 13:58:39.937994 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 13:58:39.938381 master-0 kubenswrapper[16176]: I1203 13:58:39.938322 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 13:58:39.938850 master-0 kubenswrapper[16176]: I1203 13:58:39.938550 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 13:58:39.938850 master-0 kubenswrapper[16176]: I1203 
13:58:39.938693 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.940471 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.940700 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.954251 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.957817 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.958484 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.959320 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 03 13:58:39.962294 master-0 kubenswrapper[16176]: I1203 13:58:39.959457 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 13:58:39.973471 master-0 kubenswrapper[16176]: I1203 13:58:39.973411 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 13:58:39.974411 master-0 kubenswrapper[16176]: I1203 13:58:39.974347 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 13:58:39.974693 master-0 kubenswrapper[16176]: I1203 13:58:39.974649 
16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 13:58:39.978235 master-0 kubenswrapper[16176]: I1203 13:58:39.975860 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:39.978235 master-0 kubenswrapper[16176]: I1203 13:58:39.976342 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:39.978235 master-0 kubenswrapper[16176]: I1203 13:58:39.976392 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:39.978235 master-0 kubenswrapper[16176]: I1203 13:58:39.976708 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:39.978235 master-0 kubenswrapper[16176]: I1203 13:58:39.976768 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 13:58:39.986791 master-0 kubenswrapper[16176]: I1203 13:58:39.986678 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 13:58:39.987492 master-0 kubenswrapper[16176]: I1203 13:58:39.987137 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 13:58:39.987492 master-0 kubenswrapper[16176]: I1203 13:58:39.987438 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 13:58:39.992183 master-0 kubenswrapper[16176]: I1203 13:58:39.990332 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 13:58:39.993477 master-0 kubenswrapper[16176]: I1203 13:58:39.993170 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 13:58:39.993579 master-0 kubenswrapper[16176]: I1203 13:58:39.993232 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 03 13:58:39.993788 master-0 kubenswrapper[16176]: I1203 13:58:39.993401 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 13:58:39.993875 master-0 kubenswrapper[16176]: I1203 13:58:39.993424 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 13:58:39.996382 master-0 kubenswrapper[16176]: I1203 13:58:39.996239 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 13:58:39.998704 master-0 
kubenswrapper[16176]: I1203 13:58:39.998673 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Dec 03 13:58:40.002472 master-0 kubenswrapper[16176]: I1203 13:58:40.002437 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Dec 03 13:58:40.020884 master-0 kubenswrapper[16176]: I1203 13:58:40.020166 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Dec 03 13:58:40.042496 master-0 kubenswrapper[16176]: I1203 13:58:40.042432 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Dec 03 13:58:40.054379 master-0 kubenswrapper[16176]: I1203 13:58:40.052855 16176 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 13:58:40.059499 master-0 kubenswrapper[16176]: I1203 13:58:40.059457 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 13:58:40.077764 master-0 kubenswrapper[16176]: I1203 13:58:40.077704 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077793 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077877 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" 
(UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077908 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077928 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077947 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077964 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.077989 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.078010 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.078038 master-0 kubenswrapper[16176]: I1203 13:58:40.078049 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078091 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078117 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 
13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078161 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078179 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078198 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078220 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078286 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod 
\"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078312 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078335 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078358 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078379 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.078531 master-0 
kubenswrapper[16176]: I1203 13:58:40.078401 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078421 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078407 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078437 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078437 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: 
\"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:58:40.078531 master-0 kubenswrapper[16176]: I1203 13:58:40.078440 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078575 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078612 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078633 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078657 16176 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078678 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078701 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078700 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078747 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" 
(UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078765 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078780 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078887 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078902 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078920 16176 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078952 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.078984 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079023 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079048 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: 
\"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079086 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079115 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079140 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079141 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079169 16176 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079192 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079214 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079235 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:58:40.079214 master-0 kubenswrapper[16176]: I1203 13:58:40.079257 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 
13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079293 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079327 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079357 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079334 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079382 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: 
\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079397 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079408 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079500 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079410 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079533 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079576 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079609 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079639 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079713 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:58:40.080428 master-0 
kubenswrapper[16176]: I1203 13:58:40.079607 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079784 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079809 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079831 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079862 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079884 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079896 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079912 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079958 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079976 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079990 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.079980 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080016 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080013 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080034 16176 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080061 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080143 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080206 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080244 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080319 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080344 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080365 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080402 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080425 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080449 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080472 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:58:40.080428 master-0 kubenswrapper[16176]: I1203 13:58:40.080496 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080518 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 
13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080541 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080563 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080596 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080560 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080682 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080727 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080779 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080878 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080916 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080953 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.080982 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081004 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081036 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081068 16176 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081099 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081124 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081130 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081147 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: 
\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081178 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081214 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081244 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081292 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081325 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081330 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081057 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081396 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081451 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 
13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081453 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081529 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081585 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081759 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081665 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: 
\"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081865 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.081985 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082016 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082054 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082073 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082099 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082163 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:58:40.082235 master-0 kubenswrapper[16176]: I1203 13:58:40.082184 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082456 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082476 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082508 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082533 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082553 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082577 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082598 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082619 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082640 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082682 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082703 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082723 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082749 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082768 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 
13:58:40.082794 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082814 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082822 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082844 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082897 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod 
\"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082926 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082954 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.082981 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083004 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083027 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083048 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083071 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083091 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083114 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s" Dec 03 
13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083138 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083158 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083177 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083206 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083243 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod 
\"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083274 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083317 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083371 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083393 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 13:58:40.083888 
master-0 kubenswrapper[16176]: I1203 13:58:40.083413 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083440 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083489 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083516 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083538 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083556 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083555 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083576 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083602 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: 
I1203 13:58:40.083624 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083643 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083655 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083665 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083687 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: 
\"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083707 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083731 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083752 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083771 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083796 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083814 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083833 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083857 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083873 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083894 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:58:40.083888 master-0 kubenswrapper[16176]: I1203 13:58:40.083917 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.083920 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085224 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084097 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084213 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084725 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084730 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084894 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084911 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.084972 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085011 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085426 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085477 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.083944 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085619 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085189 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085418 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085202 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085950 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085986 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086035 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086101 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086157 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086201 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086233 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086239 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086293 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086314 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086325 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086359 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086450 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086464 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086510 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086540 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086572 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086611 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086662 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086708 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086748 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086775 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086771 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.085061 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086809 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086849 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086879 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086909 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086911 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086961 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.086883 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087043 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087080 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087112 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087134 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087155 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087176 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087196 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087217 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087235 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087277 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087314 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087363 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087383 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087403 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087422 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087440 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087460 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087478 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087500 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087520 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087540 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087560 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087580 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087599 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087619 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087639 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087657 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087664 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087679 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087698 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087715 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087732 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087722 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087840 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087864 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087882 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087898 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087921 16176 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087960 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087988 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088000 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088016 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.087929 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088108 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088210 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088248 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088308 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: 
\"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088342 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088381 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088423 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088443 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088454 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088573 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088672 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088682 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088709 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " 
pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088740 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088789 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088824 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088914 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088915 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088964 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088999 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.088997 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089031 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: 
\"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089103 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089139 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089147 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089165 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089146 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089203 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089212 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.089039 master-0 kubenswrapper[16176]: I1203 13:58:40.089286 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089327 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: 
\"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089350 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089380 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089408 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089433 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089460 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089500 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089520 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089531 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089544 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 
13:58:40.089757 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.089888 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090030 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090062 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090140 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: 
\"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090168 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090077 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 13:58:40.094608 master-0 kubenswrapper[16176]: I1203 13:58:40.090473 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 13:58:40.101028 master-0 kubenswrapper[16176]: I1203 13:58:40.100977 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 13:58:40.109683 master-0 kubenswrapper[16176]: I1203 13:58:40.109635 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " 
pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:40.119161 master-0 kubenswrapper[16176]: I1203 13:58:40.118782 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 13:58:40.142377 master-0 kubenswrapper[16176]: I1203 13:58:40.138596 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 13:58:40.142377 master-0 kubenswrapper[16176]: I1203 13:58:40.140013 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.160053 master-0 kubenswrapper[16176]: I1203 13:58:40.159160 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 13:58:40.164948 master-0 kubenswrapper[16176]: I1203 13:58:40.164900 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.179476 master-0 kubenswrapper[16176]: I1203 13:58:40.179397 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 13:58:40.193537 master-0 kubenswrapper[16176]: I1203 13:58:40.193478 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193537 master-0 
kubenswrapper[16176]: I1203 13:58:40.193540 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193595 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193618 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193627 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193677 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193698 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193724 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193729 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193741 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193737 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: 
\"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193786 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193800 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.193798 master-0 kubenswrapper[16176]: I1203 13:58:40.193759 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.193822 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.193828 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.193881 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.193926 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194028 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194059 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194087 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194113 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194131 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194165 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194186 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194206 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194270 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.194344 master-0 kubenswrapper[16176]: I1203 13:58:40.194317 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194360 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194401 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194443 16176 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194481 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194520 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194538 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194579 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194609 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194638 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194655 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194719 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.194757 master-0 kubenswrapper[16176]: I1203 13:58:40.194756 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 
13:58:40.194778 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194829 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194857 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194934 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194950 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194968 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194987 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.194998 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195015 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195057 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: 
\"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195089 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195099 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195129 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195136 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195145 master-0 kubenswrapper[16176]: I1203 13:58:40.195161 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195190 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195201 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195244 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195247 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195275 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195295 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195317 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195327 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195370 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195385 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195403 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195420 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195408 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195464 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195468 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 
13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195490 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195499 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195523 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195528 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195628 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod 
\"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195640 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195669 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195699 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195723 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195742 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195769 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195781 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195800 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.195797 master-0 kubenswrapper[16176]: I1203 13:58:40.195833 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195865 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195896 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195922 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195955 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195957 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195996 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod 
\"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196000 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196049 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195670 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196110 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196123 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196146 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196167 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196176 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196241 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196296 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196312 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196325 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196358 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196360 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196380 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196398 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196418 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196438 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196442 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196473 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196474 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196490 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196503 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196566 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196570 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196590 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196598 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196612 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196628 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.195298 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196649 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196672 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196676 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:40.196968 master-0 kubenswrapper[16176]: I1203 13:58:40.196696 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") " pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.202692 master-0 kubenswrapper[16176]: I1203 13:58:40.200189 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 03 13:58:40.202692 master-0 kubenswrapper[16176]: I1203 13:58:40.200639 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.218048 master-0 kubenswrapper[16176]: I1203 13:58:40.217982 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 13:58:40.218710 master-0 kubenswrapper[16176]: I1203 13:58:40.218598 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:40.245769 master-0 kubenswrapper[16176]: I1203 13:58:40.245701 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 03 13:58:40.254977 master-0 kubenswrapper[16176]: I1203 13:58:40.254868 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.257949 master-0 kubenswrapper[16176]: I1203 13:58:40.257927 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 13:58:40.268852 master-0 kubenswrapper[16176]: I1203 13:58:40.268311 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:40.279606 master-0 kubenswrapper[16176]: I1203 13:58:40.279542 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 03 13:58:40.280822 master-0 kubenswrapper[16176]: I1203 13:58:40.280694 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.282753 master-0 kubenswrapper[16176]: I1203 13:58:40.282724 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:40.299254 master-0 kubenswrapper[16176]: I1203 13:58:40.298882 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 03 13:58:40.305748 master-0 kubenswrapper[16176]: I1203 13:58:40.305042 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.312822 master-0 kubenswrapper[16176]: I1203 13:58:40.312768 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:40.318027 master-0 kubenswrapper[16176]: I1203 13:58:40.317980 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 03 13:58:40.320548 master-0 kubenswrapper[16176]: I1203 13:58:40.320442 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.339174 master-0 kubenswrapper[16176]: I1203 13:58:40.339098 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Dec 03 13:58:40.364202 master-0 kubenswrapper[16176]: I1203 13:58:40.364133 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Dec 03 13:58:40.365956 master-0 kubenswrapper[16176]: I1203 13:58:40.365906 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 13:58:40.379182 master-0 kubenswrapper[16176]: I1203 13:58:40.379123 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 03 13:58:40.397636 master-0 kubenswrapper[16176]: I1203 13:58:40.397347 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 13:58:40.418488 master-0 kubenswrapper[16176]: I1203 13:58:40.418414 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 03 13:58:40.420521 master-0 kubenswrapper[16176]: I1203 13:58:40.420484 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.440216 master-0 kubenswrapper[16176]: I1203 13:58:40.440118 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 03 13:58:40.458483 master-0 kubenswrapper[16176]: I1203 13:58:40.458419 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 13:58:40.459932 master-0 kubenswrapper[16176]: I1203 13:58:40.459887 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.478600 master-0 kubenswrapper[16176]: I1203 13:58:40.478546 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 03 13:58:40.488408 master-0 kubenswrapper[16176]: I1203 13:58:40.488343 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.497498 master-0 kubenswrapper[16176]: I1203 13:58:40.497452 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 03 13:58:40.501243 master-0 kubenswrapper[16176]: I1203 13:58:40.501196 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.518694 master-0 kubenswrapper[16176]: I1203 13:58:40.518627 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 03 13:58:40.519871 master-0 kubenswrapper[16176]: I1203 13:58:40.519830 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.539290 master-0 kubenswrapper[16176]: I1203 13:58:40.539212 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 03 13:58:40.563476 master-0 kubenswrapper[16176]: I1203 13:58:40.563390 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:40.566950 master-0 kubenswrapper[16176]: I1203 13:58:40.566070 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 03 13:58:40.569029 master-0 kubenswrapper[16176]: I1203 13:58:40.568987 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/1.log"
Dec 03 13:58:40.569767 master-0 kubenswrapper[16176]: I1203 13:58:40.569717 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/0.log"
Dec 03 13:58:40.572422 master-0 kubenswrapper[16176]: I1203 13:58:40.571998 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" exitCode=255
Dec 03 13:58:40.572720 master-0 kubenswrapper[16176]: I1203 13:58:40.572429 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerDied","Data":"9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e"}
Dec 03 13:58:40.572720 master-0 kubenswrapper[16176]: I1203 13:58:40.572569 16176 scope.go:117] "RemoveContainer" containerID="27b4277c622910e257b98766d94f3182ae3aea1f090b364a9ed8b9175b63d219"
Dec 03 13:58:40.573211 master-0 kubenswrapper[16176]: I1203 13:58:40.573118 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"
Dec 03 13:58:40.579750 master-0 kubenswrapper[16176]: I1203 13:58:40.579697 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 13:58:40.589668 master-0 kubenswrapper[16176]: I1203 13:58:40.589363 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Dec 03 13:58:40.589668 master-0 kubenswrapper[16176]: I1203 13:58:40.589376 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 13:58:40.604762 master-0 kubenswrapper[16176]: I1203 13:58:40.604690 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 03 13:58:40.613408 master-0 kubenswrapper[16176]: I1203 13:58:40.613343 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:40.617955 master-0 kubenswrapper[16176]: I1203 13:58:40.617913 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-6sltv"
Dec 03 13:58:40.641431 master-0 kubenswrapper[16176]: I1203 13:58:40.641365 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Dec 03 13:58:40.659707 master-0 kubenswrapper[16176]: I1203 13:58:40.659546 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx"
Dec 03 13:58:40.684667 master-0 kubenswrapper[16176]: I1203 13:58:40.684568 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Dec 03 13:58:40.687853 master-0 kubenswrapper[16176]: I1203 13:58:40.687777 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:58:40.700099 master-0 kubenswrapper[16176]: I1203 13:58:40.700020 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g"
Dec 03 13:58:40.718814 master-0 kubenswrapper[16176]: I1203 13:58:40.718727 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Dec 03 13:58:40.728125 master-0 kubenswrapper[16176]: I1203 13:58:40.728047 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:58:40.747442 master-0 kubenswrapper[16176]: I1203 13:58:40.742532 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 03 13:58:40.747442 master-0 kubenswrapper[16176]: I1203 13:58:40.746930 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:58:40.763159 master-0 kubenswrapper[16176]: I1203 13:58:40.762880 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Dec 03 13:58:40.779030 master-0 kubenswrapper[16176]: I1203 13:58:40.778958 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz"
Dec 03 13:58:40.808187 master-0 kubenswrapper[16176]: I1203 13:58:40.808136 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Dec 03 13:58:40.810798 master-0 kubenswrapper[16176]: I1203 13:58:40.810749 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:58:40.818310 master-0 kubenswrapper[16176]: I1203 13:58:40.818213 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 13:58:40.837563 master-0 kubenswrapper[16176]: I1203 13:58:40.837511 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 13:58:40.859178 master-0 kubenswrapper[16176]: I1203 13:58:40.859102 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 13:58:40.878401 master-0 kubenswrapper[16176]: I1203 13:58:40.878338 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Dec 03 13:58:40.899595 master-0 kubenswrapper[16176]: I1203 13:58:40.899535 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 13:58:40.900857 master-0 kubenswrapper[16176]: I1203 13:58:40.900811 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:58:40.918458 master-0 kubenswrapper[16176]: I1203 13:58:40.917625 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 13:58:40.918458 master-0 kubenswrapper[16176]: I1203 13:58:40.917965 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 13:58:40.926134 master-0 kubenswrapper[16176]: I1203 13:58:40.926100 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:58:40.936081 master-0 kubenswrapper[16176]: I1203 13:58:40.936023 16176 request.go:700] Waited for 1.004869438s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0
Dec 03 13:58:40.938697 master-0 kubenswrapper[16176]: I1203 13:58:40.938652 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 13:58:40.939660 master-0 kubenswrapper[16176]: I1203 13:58:40.939610 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:58:40.958656 master-0 kubenswrapper[16176]: I1203 13:58:40.958600 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 13:58:40.977804 master-0 kubenswrapper[16176]: I1203 13:58:40.977752 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8"
Dec 03 13:58:40.998196 master-0 kubenswrapper[16176]: I1203 13:58:40.998161 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5"
Dec 03 13:58:41.026048 master-0 kubenswrapper[16176]: I1203 13:58:41.025987 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") pod \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") "
Dec 03 13:58:41.026334 master-0 kubenswrapper[16176]: I1203 13:58:41.026088 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") pod \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") "
Dec 03 13:58:41.026373 master-0 kubenswrapper[16176]: I1203 13:58:41.026295 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock" (OuterVolumeSpecName: "var-lock") pod "50f28c77-b15c-4b86-93c8-221c0cc82bb2" (UID: "50f28c77-b15c-4b86-93c8-221c0cc82bb2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:58:41.026686 master-0 kubenswrapper[16176]: I1203 13:58:41.026616 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "50f28c77-b15c-4b86-93c8-221c0cc82bb2" (UID: "50f28c77-b15c-4b86-93c8-221c0cc82bb2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 13:58:41.026780 master-0 kubenswrapper[16176]: I1203 13:58:41.026684 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 13:58:41.027049 master-0 kubenswrapper[16176]: I1203 13:58:41.027027 16176 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:41.027049 master-0 kubenswrapper[16176]: I1203 13:58:41.027050 16176 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/50f28c77-b15c-4b86-93c8-221c0cc82bb2-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:41.031522 master-0 kubenswrapper[16176]: I1203 13:58:41.031490 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:58:41.038316 master-0 kubenswrapper[16176]: I1203 13:58:41.038247 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Dec 03 13:58:41.040574 master-0 kubenswrapper[16176]: I1203 13:58:41.040537 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 13:58:41.057821 master-0 kubenswrapper[16176]: I1203 13:58:41.057775 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Dec 03 13:58:41.066872 master-0 kubenswrapper[16176]: I1203 13:58:41.066826 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:58:41.077682 master-0 kubenswrapper[16176]: I1203 13:58:41.077628 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 13:58:41.079337 master-0 kubenswrapper[16176]: E1203 13:58:41.079312 16176 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Dec 03 13:58:41.079337 master-0 kubenswrapper[16176]: E1203 13:58:41.079327 16176 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Dec 03 13:58:41.079457 master-0 kubenswrapper[16176]: E1203 13:58:41.079343 16176 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Dec 03 13:58:41.079457 master-0 kubenswrapper[16176]: E1203 13:58:41.079415 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images podName:85820c13-e5cf-4af1-bd1c-dd74ea151cac nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.579392148 +0000 UTC m=+12.005032810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images") pod "cluster-cloud-controller-manager-operator-76f56467d7-252sh" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac") : failed to sync configmap cache: timed out waiting for the condition
Dec 03 13:58:41.079539 master-0 kubenswrapper[16176]: E1203 13:58:41.079481 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.57944971 +0000 UTC m=+12.005090372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : failed to sync secret cache: timed out waiting for the condition
Dec 03 13:58:41.079539 master-0 kubenswrapper[16176]: E1203 13:58:41.079502 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.579493811 +0000 UTC m=+12.005134473 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.079729 master-0 kubenswrapper[16176]: E1203 13:58:41.079702 16176 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.079874 master-0 kubenswrapper[16176]: E1203 13:58:41.079857 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config podName:799e819f-f4b2-4ac9-8fa4-7d4da7a79285 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.579827241 +0000 UTC m=+12.005468133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config") pod "machine-config-daemon-2ztl9" (UID: "799e819f-f4b2-4ac9-8fa4-7d4da7a79285") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080004 master-0 kubenswrapper[16176]: E1203 13:58:41.079991 16176 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.080112 master-0 kubenswrapper[16176]: E1203 13:58:41.080102 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert podName:ecc68b17-9112-471d-89f9-15bf30dfa004 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.580092448 +0000 UTC m=+12.005733100 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert") pod "route-controller-manager-6fcd4b8856-ztns6" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.080187 master-0 kubenswrapper[16176]: E1203 13:58:41.080010 16176 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.080301 master-0 kubenswrapper[16176]: E1203 13:58:41.080289 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.580277114 +0000 UTC m=+12.005917776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.080412 master-0 kubenswrapper[16176]: E1203 13:58:41.080399 16176 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080539 master-0 kubenswrapper[16176]: E1203 13:58:41.080527 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.580514821 +0000 UTC m=+12.006155673 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080643 master-0 kubenswrapper[16176]: E1203 13:58:41.080615 16176 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080691 master-0 kubenswrapper[16176]: E1203 13:58:41.080677 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.580664905 +0000 UTC m=+12.006305567 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080791 master-0 kubenswrapper[16176]: E1203 13:58:41.080776 16176 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080888 master-0 kubenswrapper[16176]: E1203 13:58:41.080875 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config podName:85820c13-e5cf-4af1-bd1c-dd74ea151cac nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.580865471 +0000 UTC m=+12.006506133 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-76f56467d7-252sh" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.080989 master-0 kubenswrapper[16176]: E1203 13:58:41.080977 16176 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.081115 master-0 kubenswrapper[16176]: E1203 13:58:41.081102 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.581094627 +0000 UTC m=+12.006735289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.081752 master-0 kubenswrapper[16176]: E1203 13:58:41.081729 16176 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.081907 master-0 kubenswrapper[16176]: E1203 13:58:41.081892 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.58187519 +0000 UTC m=+12.007515852 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.081984 master-0 kubenswrapper[16176]: E1203 13:58:41.081752 16176 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.082084 master-0 kubenswrapper[16176]: E1203 13:58:41.082070 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.582058886 +0000 UTC m=+12.007699548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.082156 master-0 kubenswrapper[16176]: E1203 13:58:41.081784 16176 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.082240 master-0 kubenswrapper[16176]: E1203 13:58:41.082230 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca podName:ecc68b17-9112-471d-89f9-15bf30dfa004 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.58222233 +0000 UTC m=+12.007862992 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca") pod "route-controller-manager-6fcd4b8856-ztns6" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.082328 master-0 kubenswrapper[16176]: E1203 13:58:41.081781 16176 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.082405 master-0 kubenswrapper[16176]: E1203 13:58:41.082396 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls podName:799e819f-f4b2-4ac9-8fa4-7d4da7a79285 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.582388145 +0000 UTC m=+12.008028807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls") pod "machine-config-daemon-2ztl9" (UID: "799e819f-f4b2-4ac9-8fa4-7d4da7a79285") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.082560 master-0 kubenswrapper[16176]: E1203 13:58:41.082544 16176 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.082678 master-0 kubenswrapper[16176]: E1203 13:58:41.082668 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.582658863 +0000 UTC m=+12.008299525 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.085096 master-0 kubenswrapper[16176]: E1203 13:58:41.085059 16176 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.085161 master-0 kubenswrapper[16176]: E1203 13:58:41.085106 16176 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.085161 master-0 kubenswrapper[16176]: E1203 13:58:41.085143 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.585116134 +0000 UTC m=+12.010756796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.085241 master-0 kubenswrapper[16176]: E1203 13:58:41.085169 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.585157546 +0000 UTC m=+12.010798208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086304 master-0 kubenswrapper[16176]: E1203 13:58:41.086276 16176 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086364 master-0 kubenswrapper[16176]: E1203 13:58:41.086328 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586317409 +0000 UTC m=+12.011958061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086364 master-0 kubenswrapper[16176]: E1203 13:58:41.086350 16176 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.086439 master-0 kubenswrapper[16176]: E1203 13:58:41.086376 16176 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086439 master-0 kubenswrapper[16176]: E1203 13:58:41.086404 16176 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out 
waiting for the condition Dec 03 13:58:41.086439 master-0 kubenswrapper[16176]: E1203 13:58:41.086380 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls podName:85820c13-e5cf-4af1-bd1c-dd74ea151cac nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586373041 +0000 UTC m=+12.012013703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-76f56467d7-252sh" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.086535 master-0 kubenswrapper[16176]: E1203 13:58:41.086444 16176 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086535 master-0 kubenswrapper[16176]: E1203 13:58:41.086445 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config podName:ecc68b17-9112-471d-89f9-15bf30dfa004 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586433343 +0000 UTC m=+12.012074075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config") pod "route-controller-manager-6fcd4b8856-ztns6" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086535 master-0 kubenswrapper[16176]: E1203 13:58:41.086466 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586458433 +0000 UTC m=+12.012099095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086535 master-0 kubenswrapper[16176]: E1203 13:58:41.086482 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config podName:f5f23b6d-8303-46d8-892e-8e2c01b567b5 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586474854 +0000 UTC m=+12.012115506 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config") pod "controller-manager-7d8fb964c9-v2h98" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086707 master-0 kubenswrapper[16176]: E1203 13:58:41.086691 16176 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086816 master-0 kubenswrapper[16176]: E1203 13:58:41.086805 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.586793133 +0000 UTC m=+12.012433795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.086954 master-0 kubenswrapper[16176]: E1203 13:58:41.086918 16176 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.087073 master-0 kubenswrapper[16176]: E1203 13:58:41.087063 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca podName:f5f23b6d-8303-46d8-892e-8e2c01b567b5 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.587054771 +0000 UTC m=+12.012695433 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca") pod "controller-manager-7d8fb964c9-v2h98" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.088666 master-0 kubenswrapper[16176]: E1203 13:58:41.088632 16176 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.088730 master-0 kubenswrapper[16176]: E1203 13:58:41.088698 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.588677448 +0000 UTC m=+12.014318300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.088770 master-0 kubenswrapper[16176]: E1203 13:58:41.088738 16176 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.088770 master-0 kubenswrapper[16176]: E1203 13:58:41.088764 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.58875808 +0000 UTC m=+12.014398742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.088844 master-0 kubenswrapper[16176]: E1203 13:58:41.088783 16176 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.088844 master-0 kubenswrapper[16176]: E1203 13:58:41.088809 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.588802632 +0000 UTC m=+12.014443294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.088907 master-0 kubenswrapper[16176]: E1203 13:58:41.088825 16176 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.088990 master-0 kubenswrapper[16176]: E1203 13:58:41.088963 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles podName:f5f23b6d-8303-46d8-892e-8e2c01b567b5 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.588933775 +0000 UTC m=+12.014574447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles") pod "controller-manager-7d8fb964c9-v2h98" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5") : failed to sync configmap cache: timed out waiting for the condition Dec 03 13:58:41.089962 master-0 kubenswrapper[16176]: E1203 13:58:41.089935 16176 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.090019 master-0 kubenswrapper[16176]: E1203 13:58:41.089996 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.589983786 +0000 UTC m=+12.015624448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.090124 master-0 kubenswrapper[16176]: E1203 13:58:41.090108 16176 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.090225 master-0 kubenswrapper[16176]: E1203 13:58:41.090214 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert podName:f5f23b6d-8303-46d8-892e-8e2c01b567b5 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:41.590203622 +0000 UTC m=+12.015844284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert") pod "controller-manager-7d8fb964c9-v2h98" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5") : failed to sync secret cache: timed out waiting for the condition Dec 03 13:58:41.097989 master-0 kubenswrapper[16176]: I1203 13:58:41.097958 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 13:58:41.117137 master-0 kubenswrapper[16176]: I1203 13:58:41.117057 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4" Dec 03 13:58:41.137450 master-0 kubenswrapper[16176]: I1203 13:58:41.137382 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Dec 03 13:58:41.157639 master-0 kubenswrapper[16176]: I1203 13:58:41.157595 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Dec 03 13:58:41.177182 master-0 kubenswrapper[16176]: I1203 13:58:41.177037 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Dec 03 13:58:41.197657 master-0 kubenswrapper[16176]: I1203 13:58:41.197613 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Dec 03 13:58:41.218073 master-0 kubenswrapper[16176]: I1203 13:58:41.218002 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Dec 03 13:58:41.248306 master-0 kubenswrapper[16176]: I1203 13:58:41.242719 16176 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Dec 03 13:58:41.257754 master-0 kubenswrapper[16176]: I1203 13:58:41.257697 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 13:58:41.277747 master-0 kubenswrapper[16176]: I1203 13:58:41.277680 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Dec 03 13:58:41.298195 master-0 kubenswrapper[16176]: I1203 13:58:41.298144 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw" Dec 03 13:58:41.319515 master-0 kubenswrapper[16176]: I1203 13:58:41.319466 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 13:58:41.338450 master-0 kubenswrapper[16176]: I1203 13:58:41.338379 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Dec 03 13:58:41.360104 master-0 kubenswrapper[16176]: I1203 13:58:41.360055 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 13:58:41.377533 master-0 kubenswrapper[16176]: I1203 13:58:41.377476 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99" Dec 03 13:58:41.398205 master-0 kubenswrapper[16176]: I1203 13:58:41.398145 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 13:58:41.418378 master-0 kubenswrapper[16176]: I1203 13:58:41.418312 16176 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7ctx2" Dec 03 13:58:41.438595 master-0 kubenswrapper[16176]: I1203 13:58:41.438418 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 13:58:41.458960 master-0 kubenswrapper[16176]: I1203 13:58:41.458897 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6" Dec 03 13:58:41.478363 master-0 kubenswrapper[16176]: I1203 13:58:41.478303 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 13:58:41.497539 master-0 kubenswrapper[16176]: I1203 13:58:41.497498 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 13:58:41.519948 master-0 kubenswrapper[16176]: I1203 13:58:41.519874 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 13:58:41.538024 master-0 kubenswrapper[16176]: I1203 13:58:41.537961 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 13:58:41.558180 master-0 kubenswrapper[16176]: I1203 13:58:41.558077 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 13:58:41.581056 master-0 kubenswrapper[16176]: I1203 13:58:41.580961 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 13:58:41.585483 master-0 kubenswrapper[16176]: I1203 13:58:41.585437 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/1.log" Dec 03 13:58:41.590364 master-0 kubenswrapper[16176]: I1203 13:58:41.588715 16176 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"50f28c77-b15c-4b86-93c8-221c0cc82bb2","Type":"ContainerDied","Data":"0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4"} Dec 03 13:58:41.590364 master-0 kubenswrapper[16176]: I1203 13:58:41.588793 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f818fbb8a023f88832f807d7a282f25eef3ce187580242eb861097b89a358b4" Dec 03 13:58:41.590364 master-0 kubenswrapper[16176]: I1203 13:58:41.588759 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Dec 03 13:58:41.598354 master-0 kubenswrapper[16176]: I1203 13:58:41.598275 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 13:58:41.619039 master-0 kubenswrapper[16176]: I1203 13:58:41.618963 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 13:58:41.638593 master-0 kubenswrapper[16176]: I1203 13:58:41.638527 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd" Dec 03 13:58:41.639356 master-0 kubenswrapper[16176]: I1203 13:58:41.639286 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.639356 master-0 kubenswrapper[16176]: I1203 13:58:41.639347 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:41.639453 master-0 kubenswrapper[16176]: I1203 13:58:41.639383 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.639453 master-0 kubenswrapper[16176]: I1203 13:58:41.639414 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.639509 master-0 kubenswrapper[16176]: I1203 13:58:41.639450 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:41.639509 master-0 kubenswrapper[16176]: I1203 13:58:41.639488 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.639571 master-0 kubenswrapper[16176]: I1203 13:58:41.639513 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 13:58:41.639571 master-0 kubenswrapper[16176]: I1203 13:58:41.639546 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:41.639674 master-0 kubenswrapper[16176]: I1203 13:58:41.639641 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:41.639755 master-0 kubenswrapper[16176]: I1203 13:58:41.639728 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 13:58:41.639797 master-0 kubenswrapper[16176]: I1203 13:58:41.639762 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.639797 master-0 kubenswrapper[16176]: I1203 13:58:41.639768 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:41.639853 master-0 kubenswrapper[16176]: I1203 13:58:41.639796 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.639914 master-0 kubenswrapper[16176]: I1203 13:58:41.639830 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.639964 master-0 kubenswrapper[16176]: I1203 13:58:41.639920 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: 
\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:41.640058 master-0 kubenswrapper[16176]: I1203 13:58:41.640018 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.640058 master-0 kubenswrapper[16176]: I1203 13:58:41.640050 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.640128 master-0 kubenswrapper[16176]: I1203 13:58:41.640064 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:41.640128 master-0 kubenswrapper[16176]: I1203 13:58:41.640119 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:41.640289 master-0 kubenswrapper[16176]: I1203 
13:58:41.640196 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:41.640289 master-0 kubenswrapper[16176]: I1203 13:58:41.640223 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:41.640289 master-0 kubenswrapper[16176]: I1203 13:58:41.640245 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.640390 master-0 kubenswrapper[16176]: I1203 13:58:41.640345 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:41.640390 master-0 kubenswrapper[16176]: I1203 13:58:41.640382 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:41.640483 master-0 kubenswrapper[16176]: I1203 13:58:41.640415 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:41.640483 master-0 kubenswrapper[16176]: I1203 13:58:41.640448 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:41.640558 master-0 kubenswrapper[16176]: I1203 13:58:41.640488 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:58:41.640558 master-0 kubenswrapper[16176]: I1203 13:58:41.640515 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 
13:58:41.640558 master-0 kubenswrapper[16176]: I1203 13:58:41.640543 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:41.640682 master-0 kubenswrapper[16176]: I1203 13:58:41.640611 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:41.640739 master-0 kubenswrapper[16176]: I1203 13:58:41.640678 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:41.640739 master-0 kubenswrapper[16176]: I1203 13:58:41.640685 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:41.640739 master-0 kubenswrapper[16176]: I1203 13:58:41.640723 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod 
\"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:41.640828 master-0 kubenswrapper[16176]: I1203 13:58:41.640740 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 13:58:41.640828 master-0 kubenswrapper[16176]: I1203 13:58:41.640728 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:41.640828 master-0 kubenswrapper[16176]: I1203 13:58:41.640770 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:41.640828 master-0 kubenswrapper[16176]: I1203 13:58:41.640792 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.640828 
master-0 kubenswrapper[16176]: I1203 13:58:41.640823 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 13:58:41.641019 master-0 kubenswrapper[16176]: I1203 13:58:41.640907 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 13:58:41.641019 master-0 kubenswrapper[16176]: I1203 13:58:41.640951 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:41.641019 master-0 kubenswrapper[16176]: I1203 13:58:41.641005 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.641140 master-0 kubenswrapper[16176]: I1203 13:58:41.641025 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod 
\"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.641140 master-0 kubenswrapper[16176]: I1203 13:58:41.641091 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:41.641201 master-0 kubenswrapper[16176]: I1203 13:58:41.641187 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 13:58:41.641356 master-0 kubenswrapper[16176]: I1203 13:58:41.641330 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:41.641448 master-0 kubenswrapper[16176]: I1203 13:58:41.641427 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:41.641597 master-0 kubenswrapper[16176]: I1203 13:58:41.641505 16176 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:58:41.660326 master-0 kubenswrapper[16176]: I1203 13:58:41.660235 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68" Dec 03 13:58:41.677590 master-0 kubenswrapper[16176]: I1203 13:58:41.677514 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Dec 03 13:58:41.698302 master-0 kubenswrapper[16176]: I1203 13:58:41.698120 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Dec 03 13:58:41.701348 master-0 kubenswrapper[16176]: I1203 13:58:41.701254 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:41.723601 master-0 kubenswrapper[16176]: I1203 13:58:41.723536 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Dec 03 13:58:41.731180 master-0 kubenswrapper[16176]: I1203 13:58:41.731113 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 
13:58:41.739724 master-0 kubenswrapper[16176]: I1203 13:58:41.739668 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Dec 03 13:58:41.750715 master-0 kubenswrapper[16176]: I1203 13:58:41.750650 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 13:58:41.757727 master-0 kubenswrapper[16176]: I1203 13:58:41.757672 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Dec 03 13:58:41.777856 master-0 kubenswrapper[16176]: I1203 13:58:41.777785 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 13:58:41.780238 master-0 kubenswrapper[16176]: I1203 13:58:41.780183 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.797843 master-0 kubenswrapper[16176]: I1203 13:58:41.797778 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d" Dec 03 13:58:41.801079 master-0 kubenswrapper[16176]: I1203 13:58:41.801035 16176 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Dec 03 13:58:41.817651 master-0 kubenswrapper[16176]: I1203 13:58:41.817593 16176 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 03 13:58:41.838057 master-0 kubenswrapper[16176]: I1203 13:58:41.837956 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 13:58:41.841948 master-0 kubenswrapper[16176]: I1203 13:58:41.841909 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.858407 master-0 kubenswrapper[16176]: I1203 13:58:41.857980 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 13:58:41.862048 master-0 kubenswrapper[16176]: I1203 13:58:41.861852 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 13:58:41.866759 master-0 kubenswrapper[16176]: I1203 13:58:41.866681 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 13:58:41.877745 master-0 kubenswrapper[16176]: I1203 13:58:41.877653 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 
03 13:58:41.897931 master-0 kubenswrapper[16176]: I1203 13:58:41.897844 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl"
Dec 03 13:58:41.917564 master-0 kubenswrapper[16176]: I1203 13:58:41.917441 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 13:58:41.920996 master-0 kubenswrapper[16176]: I1203 13:58:41.920934 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:58:41.944484 master-0 kubenswrapper[16176]: I1203 13:58:41.944409 16176 request.go:700] Waited for 1.985778884s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0
Dec 03 13:58:41.955117 master-0 kubenswrapper[16176]: I1203 13:58:41.954960 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 13:58:41.957044 master-0 kubenswrapper[16176]: I1203 13:58:41.957019 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 13:58:41.960361 master-0 kubenswrapper[16176]: I1203 13:58:41.960323 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:41.977454 master-0 kubenswrapper[16176]: I1203 13:58:41.977396 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 13:58:41.980371 master-0 kubenswrapper[16176]: I1203 13:58:41.980323 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:41.997947 master-0 kubenswrapper[16176]: I1203 13:58:41.997861 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv"
Dec 03 13:58:42.017590 master-0 kubenswrapper[16176]: I1203 13:58:42.017492 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 13:58:42.037622 master-0 kubenswrapper[16176]: I1203 13:58:42.037513 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 13:58:42.041198 master-0 kubenswrapper[16176]: I1203 13:58:42.041136 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:42.057772 master-0 kubenswrapper[16176]: I1203 13:58:42.057721 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:58:42.084005 master-0 kubenswrapper[16176]: I1203 13:58:42.083916 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 13:58:42.091455 master-0 kubenswrapper[16176]: I1203 13:58:42.091411 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:42.098441 master-0 kubenswrapper[16176]: I1203 13:58:42.098385 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 13:58:42.118746 master-0 kubenswrapper[16176]: I1203 13:58:42.118644 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 13:58:42.121786 master-0 kubenswrapper[16176]: I1203 13:58:42.121741 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:42.138105 master-0 kubenswrapper[16176]: I1203 13:58:42.138063 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Dec 03 13:58:42.158377 master-0 kubenswrapper[16176]: I1203 13:58:42.158307 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9"
Dec 03 13:58:42.757871 master-0 kubenswrapper[16176]: I1203 13:58:42.757792 16176 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 13:58:42.761485 master-0 kubenswrapper[16176]: I1203 13:58:42.761423 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 13:58:42.761561 master-0 kubenswrapper[16176]: I1203 13:58:42.761513 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 13:58:42.761561 master-0 kubenswrapper[16176]: I1203 13:58:42.761540 16176 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 13:58:42.761971 master-0 kubenswrapper[16176]: I1203 13:58:42.761935 16176 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 13:58:42.819576 master-0 kubenswrapper[16176]: I1203 13:58:42.819512 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"cluster-cloud-controller-manager-operator-76f56467d7-252sh\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"
Dec 03 13:58:42.819996 master-0 kubenswrapper[16176]: I1203 13:58:42.819967 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 13:58:42.820213 master-0 kubenswrapper[16176]: I1203 13:58:42.820175 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 13:58:42.820295 master-0 kubenswrapper[16176]: I1203 13:58:42.820226 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 13:58:42.821026 master-0 kubenswrapper[16176]: I1203 13:58:42.820992 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 13:58:42.821290 master-0 kubenswrapper[16176]: I1203 13:58:42.821231 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:58:42.821447 master-0 kubenswrapper[16176]: I1203 13:58:42.821415 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 13:58:42.822081 master-0 kubenswrapper[16176]: I1203 13:58:42.822046 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 13:58:42.825936 master-0 kubenswrapper[16176]: I1203 13:58:42.825872 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 13:58:42.827149 master-0 kubenswrapper[16176]: I1203 13:58:42.827098 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 13:58:42.834953 master-0 kubenswrapper[16176]: I1203 13:58:42.834886 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 13:58:42.835521 master-0 kubenswrapper[16176]: I1203 13:58:42.835472 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 13:58:42.837459 master-0 kubenswrapper[16176]: I1203 13:58:42.837420 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"route-controller-manager-6fcd4b8856-ztns6\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"
Dec 03 13:58:42.840228 master-0 kubenswrapper[16176]: I1203 13:58:42.840177 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 13:58:42.842636 master-0 kubenswrapper[16176]: I1203 13:58:42.842600 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"redhat-operators-6rjqz\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") " pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 13:58:42.843499 master-0 kubenswrapper[16176]: I1203 13:58:42.843451 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 13:58:42.846254 master-0 kubenswrapper[16176]: I1203 13:58:42.845968 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 13:58:42.847142 master-0 kubenswrapper[16176]: I1203 13:58:42.847081 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 13:58:42.849534 master-0 kubenswrapper[16176]: I1203 13:58:42.849476 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"community-operators-582c5\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " pod="openshift-marketplace/community-operators-582c5"
Dec 03 13:58:42.853190 master-0 kubenswrapper[16176]: I1203 13:58:42.853126 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 13:58:42.854965 master-0 kubenswrapper[16176]: I1203 13:58:42.854907 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 13:58:42.855995 master-0 kubenswrapper[16176]: I1203 13:58:42.855945 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 13:58:42.859744 master-0 kubenswrapper[16176]: I1203 13:58:42.859689 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 13:58:42.860030 master-0 kubenswrapper[16176]: I1203 13:58:42.859857 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"redhat-marketplace-mtm6s\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") " pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 13:58:42.860030 master-0 kubenswrapper[16176]: I1203 13:58:42.859936 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 13:58:42.860239 master-0 kubenswrapper[16176]: I1203 13:58:42.860172 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 13:58:42.860785 master-0 kubenswrapper[16176]: I1203 13:58:42.860710 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:58:42.861474 master-0 kubenswrapper[16176]: I1203 13:58:42.861424 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 13:58:42.862659 master-0 kubenswrapper[16176]: I1203 13:58:42.862578 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 13:58:42.862659 master-0 kubenswrapper[16176]: I1203 13:58:42.862637 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 13:58:42.863545 master-0 kubenswrapper[16176]: I1203 13:58:42.863502 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 13:58:42.865205 master-0 kubenswrapper[16176]: I1203 13:58:42.865163 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 13:58:42.875230 master-0 kubenswrapper[16176]: I1203 13:58:42.875164 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 13:58:42.877585 master-0 kubenswrapper[16176]: I1203 13:58:42.877548 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 13:58:42.894807 master-0 kubenswrapper[16176]: I1203 13:58:42.894747 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:58:42.917699 master-0 kubenswrapper[16176]: I1203 13:58:42.917625 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:42.938080 master-0 kubenswrapper[16176]: I1203 13:58:42.937112 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 13:58:42.947245 master-0 kubenswrapper[16176]: I1203 13:58:42.946728 16176 scope.go:117] "RemoveContainer" containerID="f95667957e520f9243348981a363f18d6f40c97711a492658710a77286524bca"
Dec 03 13:58:42.956080 master-0 kubenswrapper[16176]: I1203 13:58:42.955856 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:58:42.956186 master-0 kubenswrapper[16176]: I1203 13:58:42.956132 16176 request.go:700] Waited for 2.870090382s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token
Dec 03 13:58:42.956327 master-0 kubenswrapper[16176]: I1203 13:58:42.956139 16176 scope.go:117] "RemoveContainer" containerID="c5498229c064870000ea3daf72432927db1bd1e50fb18b1e394aaea41976762e"
Dec 03 13:58:42.965207 master-0 kubenswrapper[16176]: I1203 13:58:42.965054 16176 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 13:58:42.971656 master-0 kubenswrapper[16176]: I1203 13:58:42.971611 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 13:58:42.994964 master-0 kubenswrapper[16176]: I1203 13:58:42.994885 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 13:58:43.013925 master-0 kubenswrapper[16176]: I1203 13:58:43.013756 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 13:58:43.033363 master-0 kubenswrapper[16176]: I1203 13:58:43.033284 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 13:58:43.055553 master-0 kubenswrapper[16176]: I1203 13:58:43.055472 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 13:58:43.069972 master-0 kubenswrapper[16176]: I1203 13:58:43.069906 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 13:58:43.092379 master-0 kubenswrapper[16176]: I1203 13:58:43.092317 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 13:58:43.118527 master-0 kubenswrapper[16176]: I1203 13:58:43.118451 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"controller-manager-7d8fb964c9-v2h98\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"
Dec 03 13:58:43.139001 master-0 kubenswrapper[16176]: I1203 13:58:43.138637 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 13:58:43.156424 master-0 kubenswrapper[16176]: I1203 13:58:43.156375 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 13:58:43.175950 master-0 kubenswrapper[16176]: I1203 13:58:43.175861 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 13:58:43.191838 master-0 kubenswrapper[16176]: I1203 13:58:43.191746 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 13:58:43.212972 master-0 kubenswrapper[16176]: I1203 13:58:43.210090 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 13:58:43.242708 master-0 kubenswrapper[16176]: I1203 13:58:43.233677 16176 scope.go:117] "RemoveContainer" containerID="ecdb30fdbb4d4e7e6a5ab2a8c0c78dc966b6766d4fc8dacd3b90e5acf0728097"
Dec 03 13:58:43.242708 master-0 kubenswrapper[16176]: I1203 13:58:43.237642 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 13:58:43.257705 master-0 kubenswrapper[16176]: I1203 13:58:43.256901 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 13:58:43.270647 master-0 kubenswrapper[16176]: I1203 13:58:43.270593 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 13:58:43.292232 master-0 kubenswrapper[16176]: I1203 13:58:43.291740 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:43.315527 master-0 kubenswrapper[16176]: I1203 13:58:43.315467 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 13:58:43.339202 master-0 kubenswrapper[16176]: E1203 13:58:43.339140 16176 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 13:58:43.339202 master-0 kubenswrapper[16176]: E1203 13:58:43.339199 16176 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 13:58:43.339414 master-0 kubenswrapper[16176]: E1203 13:58:43.339338 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access podName:50f28c77-b15c-4b86-93c8-221c0cc82bb2 nodeName:}" failed. No retries permitted until 2025-12-03 13:58:43.839309505 +0000 UTC m=+14.264950167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access") pod "installer-2-master-0" (UID: "50f28c77-b15c-4b86-93c8-221c0cc82bb2") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 13:58:43.353881 master-0 kubenswrapper[16176]: I1203 13:58:43.353471 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 13:58:43.375856 master-0 kubenswrapper[16176]: I1203 13:58:43.375764 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 13:58:43.379135 master-0 kubenswrapper[16176]: I1203 13:58:43.379082 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") pod \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\" (UID: \"50f28c77-b15c-4b86-93c8-221c0cc82bb2\") "
Dec 03 13:58:43.395583 master-0 kubenswrapper[16176]: I1203 13:58:43.395516 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"machine-approver-5775bfbf6d-vtvbd\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"
Dec 03 13:58:43.414523 master-0 kubenswrapper[16176]: I1203 13:58:43.414239 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 13:58:43.443865 master-0 kubenswrapper[16176]: I1203 13:58:43.443784 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 13:58:43.459597 master-0 kubenswrapper[16176]: I1203 13:58:43.459542 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 13:58:43.469316 master-0 kubenswrapper[16176]: I1203 13:58:43.469197 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "50f28c77-b15c-4b86-93c8-221c0cc82bb2" (UID: "50f28c77-b15c-4b86-93c8-221c0cc82bb2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 13:58:43.475372 master-0 kubenswrapper[16176]: I1203 13:58:43.475321 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 13:58:43.481020 master-0 kubenswrapper[16176]: I1203 13:58:43.480987 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/50f28c77-b15c-4b86-93c8-221c0cc82bb2-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 13:58:43.491638 master-0 kubenswrapper[16176]: I1203 13:58:43.491593 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 13:58:43.514549 master-0 kubenswrapper[16176]: I1203 13:58:43.514451 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e"
Dec 03 13:58:43.514817 master-0 kubenswrapper[16176]: E1203 13:58:43.514708 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)\"" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b"
Dec 03 13:58:43.544306 master-0 kubenswrapper[16176]: I1203 13:58:43.542527 16176 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="13238af3704fe583f617f61e755cf4c2" killPodOptions=""
Dec 03 13:58:43.544810 master-0 kubenswrapper[16176]: E1203 13:58:43.544754 16176 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.752s"
Dec 03 13:58:43.544954 master-0 kubenswrapper[16176]: I1203 13:58:43.544915 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 13:58:43.545026 master-0 kubenswrapper[16176]: I1203 13:58:43.544987 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 13:58:43.545061 master-0 kubenswrapper[16176]: I1203 13:58:43.545039 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563405 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563534 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563550 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563565 16176 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="50f3000b-6567-4af2-8ea9-ca37d40ead7a"
Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563589 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api"
pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563599 16176 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="50f3000b-6567-4af2-8ea9-ca37d40ead7a" Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563627 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563648 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563678 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563690 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:43.563735 master-0 kubenswrapper[16176]: I1203 13:58:43.563726 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563773 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563796 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563825 16176 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563850 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563869 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563921 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563960 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.563983 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564006 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564035 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564069 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564095 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564119 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 13:58:43.564152 master-0 kubenswrapper[16176]: I1203 13:58:43.564145 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:43.564549 master-0 kubenswrapper[16176]: I1203 13:58:43.564171 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:43.564549 master-0 kubenswrapper[16176]: I1203 13:58:43.564192 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5m4f8" Dec 03 13:58:43.564549 master-0 kubenswrapper[16176]: I1203 13:58:43.564216 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:43.564549 master-0 kubenswrapper[16176]: I1203 13:58:43.564242 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:43.574671 master-0 kubenswrapper[16176]: I1203 13:58:43.574604 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 13:58:43.591201 master-0 kubenswrapper[16176]: I1203 13:58:43.590001 16176 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Dec 03 13:58:43.591201 master-0 kubenswrapper[16176]: I1203 13:58:43.590115 16176 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Dec 03 13:58:43.614584 master-0 kubenswrapper[16176]: I1203 13:58:43.614510 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerID="350dd63506ed1a80d3072decc1cb3b56d8adbc1be35b68890406445d1bd9787c" exitCode=0 Dec 03 13:58:43.614715 master-0 kubenswrapper[16176]: I1203 13:58:43.614616 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rjqz" event={"ID":"03494fce-881e-4eb6-bc3d-570f1d8e7c52","Type":"ContainerDied","Data":"350dd63506ed1a80d3072decc1cb3b56d8adbc1be35b68890406445d1bd9787c"} Dec 03 13:58:43.632833 master-0 kubenswrapper[16176]: I1203 13:58:43.632776 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"e462cc64bf600cb19a14c9eb951e600ebb342aa06ab773db9de1ac6cde86e108"} Dec 03 13:58:43.635936 master-0 kubenswrapper[16176]: I1203 13:58:43.635899 16176 generic.go:334] "Generic (PLEG): container finished" podID="486d4964-18cc-4adc-b82d-b09627cadda4" containerID="80d43f6a35677fbb779e0128bd416d4983486981e8541d1c8f04143c9e02738d" exitCode=0 Dec 03 13:58:43.636112 master-0 kubenswrapper[16176]: I1203 13:58:43.636031 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtm6s" event={"ID":"486d4964-18cc-4adc-b82d-b09627cadda4","Type":"ContainerDied","Data":"80d43f6a35677fbb779e0128bd416d4983486981e8541d1c8f04143c9e02738d"} Dec 03 13:58:43.670064 master-0 kubenswrapper[16176]: I1203 13:58:43.669806 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/1.log" Dec 03 13:58:43.671503 master-0 kubenswrapper[16176]: I1203 13:58:43.671328 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/0.log" Dec 03 13:58:43.671503 master-0 
kubenswrapper[16176]: I1203 13:58:43.671478 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c"} Dec 03 13:58:43.691987 master-0 kubenswrapper[16176]: I1203 13:58:43.691929 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" event={"ID":"ecc68b17-9112-471d-89f9-15bf30dfa004","Type":"ContainerStarted","Data":"198e100a87c3ebf3cf56cbb72aee221b10d3d9da6179d5cd2d009567c565ee93"} Dec 03 13:58:43.692362 master-0 kubenswrapper[16176]: I1203 13:58:43.692296 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:43.728747 master-0 kubenswrapper[16176]: I1203 13:58:43.728686 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"85e3ae2d3b2246b8932537de502b184c8858439bc1a9ba3c872c5949cd9e443e"} Dec 03 13:58:43.735521 master-0 kubenswrapper[16176]: I1203 13:58:43.735487 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerStarted","Data":"494b82a3c99dd4e6f68eb31d0be4e4dfb45665eea67b8ef1f7a9ec95d78781cf"} Dec 03 13:58:43.745728 master-0 kubenswrapper[16176]: I1203 13:58:43.745675 16176 scope.go:117] "RemoveContainer" containerID="f6cc1051013eb2653a1f1addc2078de44a82444035227ab36b585d9c55ec78f1" Dec 03 13:58:43.786840 master-0 kubenswrapper[16176]: I1203 13:58:43.784424 16176 scope.go:117] "RemoveContainer" 
containerID="d559032002ae450f2dcc5a6551686ae528fbdc12019934f45dbbd1835ac0a064" Dec 03 13:58:43.788841 master-0 kubenswrapper[16176]: E1203 13:58:43.788149 16176 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:43.788841 master-0 kubenswrapper[16176]: I1203 13:58:43.788492 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" Dec 03 13:58:43.788841 master-0 kubenswrapper[16176]: E1203 13:58:43.788739 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)\"" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" Dec 03 13:58:43.832122 master-0 kubenswrapper[16176]: I1203 13:58:43.831223 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13238af3704fe583f617f61e755cf4c2" path="/var/lib/kubelet/pods/13238af3704fe583f617f61e755cf4c2/volumes" Dec 03 13:58:43.910442 master-0 kubenswrapper[16176]: I1203 13:58:43.910381 16176 scope.go:117] "RemoveContainer" containerID="23c11c9c510eb0adf984e6586dd2718268103b8272cd4d15e395e90badd0b5a3" Dec 03 13:58:44.693379 master-0 kubenswrapper[16176]: I1203 13:58:44.693296 16176 patch_prober.go:28] interesting pod/route-controller-manager-6fcd4b8856-ztns6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 13:58:44.693942 master-0 kubenswrapper[16176]: I1203 13:58:44.693408 16176 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 13:58:44.776602 master-0 kubenswrapper[16176]: I1203 13:58:44.776515 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerStarted","Data":"726b655b46a6c189da184bba2daa67a8497d7a4a3a7bc1d712341a2a28d72fdf"} Dec 03 13:58:44.783216 master-0 kubenswrapper[16176]: I1203 13:58:44.780173 16176 generic.go:334] "Generic (PLEG): container finished" podID="c95705e3-17ef-40fe-89e8-22586a32621b" containerID="85e3ae2d3b2246b8932537de502b184c8858439bc1a9ba3c872c5949cd9e443e" exitCode=0 Dec 03 13:58:44.783216 master-0 kubenswrapper[16176]: I1203 13:58:44.780231 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerDied","Data":"85e3ae2d3b2246b8932537de502b184c8858439bc1a9ba3c872c5949cd9e443e"} Dec 03 13:58:44.783216 master-0 kubenswrapper[16176]: I1203 13:58:44.780299 16176 scope.go:117] "RemoveContainer" containerID="c5498229c064870000ea3daf72432927db1bd1e50fb18b1e394aaea41976762e" Dec 03 13:58:44.783216 master-0 kubenswrapper[16176]: I1203 13:58:44.780920 16176 scope.go:117] "RemoveContainer" containerID="85e3ae2d3b2246b8932537de502b184c8858439bc1a9ba3c872c5949cd9e443e" Dec 03 13:58:44.783216 master-0 kubenswrapper[16176]: E1203 13:58:44.781129 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=insights-operator pod=insights-operator-59d99f9b7b-74sss_openshift-insights(c95705e3-17ef-40fe-89e8-22586a32621b)\"" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 13:58:44.787121 master-0 kubenswrapper[16176]: I1203 13:58:44.786920 16176 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="e462cc64bf600cb19a14c9eb951e600ebb342aa06ab773db9de1ac6cde86e108" exitCode=0 Dec 03 13:58:44.787121 master-0 kubenswrapper[16176]: I1203 13:58:44.787021 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"e462cc64bf600cb19a14c9eb951e600ebb342aa06ab773db9de1ac6cde86e108"} Dec 03 13:58:44.790782 master-0 kubenswrapper[16176]: I1203 13:58:44.790645 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/1.log" Dec 03 13:58:44.793143 master-0 kubenswrapper[16176]: I1203 13:58:44.792830 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/0.log" Dec 03 13:58:44.793143 master-0 kubenswrapper[16176]: I1203 13:58:44.792889 16176 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c" exitCode=1 Dec 03 13:58:44.793143 master-0 kubenswrapper[16176]: I1203 13:58:44.792988 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" 
event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c"} Dec 03 13:58:44.793143 master-0 kubenswrapper[16176]: I1203 13:58:44.793041 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"db27dcf8d44a2c7f1842719b86cb23a142abec21f5f241b9c57e46c810dc5d5e"} Dec 03 13:58:44.798322 master-0 kubenswrapper[16176]: I1203 13:58:44.793956 16176 scope.go:117] "RemoveContainer" containerID="6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c" Dec 03 13:58:44.798322 master-0 kubenswrapper[16176]: E1203 13:58:44.794285 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5fdc576499-j2n8j_openshift-machine-api(690d1f81-7b1f-4fd0-9b6e-154c9687c744)\"" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 13:58:44.801434 master-0 kubenswrapper[16176]: I1203 13:58:44.801340 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5dcaccc5-46b1-4a38-b3af-6839dec529d3","Type":"ContainerStarted","Data":"296cf7d08c2a38ce14296567e7b95dead04de3b7bcb6bac3f6e692cbdb93718e"} Dec 03 13:58:44.805757 master-0 kubenswrapper[16176]: I1203 13:58:44.805723 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" event={"ID":"f5f23b6d-8303-46d8-892e-8e2c01b567b5","Type":"ContainerStarted","Data":"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8"} Dec 03 13:58:44.808830 master-0 kubenswrapper[16176]: I1203 13:58:44.808774 16176 
scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" Dec 03 13:58:44.811016 master-0 kubenswrapper[16176]: E1203 13:58:44.808974 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)\"" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" Dec 03 13:58:44.811016 master-0 kubenswrapper[16176]: I1203 13:58:44.810956 16176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 13:58:44.813317 master-0 kubenswrapper[16176]: I1203 13:58:44.811791 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:44.832526 master-0 kubenswrapper[16176]: I1203 13:58:44.829411 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 13:58:45.203421 master-0 kubenswrapper[16176]: I1203 13:58:45.202938 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:45.812706 master-0 kubenswrapper[16176]: I1203 13:58:45.812612 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" Dec 03 13:58:45.815890 master-0 kubenswrapper[16176]: E1203 13:58:45.812875 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)\"" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" Dec 03 13:58:45.815890 master-0 kubenswrapper[16176]: I1203 13:58:45.814235 16176 scope.go:117] "RemoveContainer" containerID="6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c" Dec 03 13:58:45.815890 master-0 kubenswrapper[16176]: E1203 13:58:45.814479 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5fdc576499-j2n8j_openshift-machine-api(690d1f81-7b1f-4fd0-9b6e-154c9687c744)\"" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 13:58:45.815890 master-0 kubenswrapper[16176]: I1203 13:58:45.814659 16176 patch_prober.go:28] interesting pod/route-controller-manager-6fcd4b8856-ztns6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 13:58:45.815890 master-0 kubenswrapper[16176]: I1203 13:58:45.814690 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 13:58:46.036702 master-0 kubenswrapper[16176]: I1203 13:58:46.036627 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:58:46.389859 master-0 kubenswrapper[16176]: I1203 
13:58:46.389662 16176 scope.go:117] "RemoveContainer" containerID="ecdb30fdbb4d4e7e6a5ab2a8c0c78dc966b6766d4fc8dacd3b90e5acf0728097" Dec 03 13:58:46.777456 master-0 kubenswrapper[16176]: I1203 13:58:46.776987 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"] Dec 03 13:58:46.777456 master-0 kubenswrapper[16176]: I1203 13:58:46.777349 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="kube-rbac-proxy" containerID="cri-o://c446c9128a6cdcc1b5ae17378b359fd872223fb29994a04233b4de462e78ee58" gracePeriod=30 Dec 03 13:58:46.777798 master-0 kubenswrapper[16176]: I1203 13:58:46.777595 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="machine-approver-controller" containerID="cri-o://8cd048874efe4b30f5f42dd06b85dd1c97db84e3f9ffabe72fd07644d0447417" gracePeriod=30 Dec 03 13:58:46.861839 master-0 kubenswrapper[16176]: I1203 13:58:46.861665 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" Dec 03 13:58:46.862444 master-0 kubenswrapper[16176]: E1203 13:58:46.861929 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-master-0_openshift-kube-apiserver(69e3deb6aaa7ca82dd236253a197e02b)\"" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" Dec 03 13:58:47.551565 master-0 kubenswrapper[16176]: I1203 13:58:47.551463 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"] Dec 03 13:58:47.554774 master-0 kubenswrapper[16176]: I1203 13:58:47.554733 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-78ddcf56f9-8l84w"] Dec 03 13:58:47.803631 master-0 kubenswrapper[16176]: I1203 13:58:47.802605 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" path="/var/lib/kubelet/pods/63aae3b9-9a72-497e-af01-5d8b8d0ac876/volumes" Dec 03 13:58:47.881053 master-0 kubenswrapper[16176]: I1203 13:58:47.880891 16176 generic.go:334] "Generic (PLEG): container finished" podID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerID="8cd048874efe4b30f5f42dd06b85dd1c97db84e3f9ffabe72fd07644d0447417" exitCode=0 Dec 03 13:58:47.881053 master-0 kubenswrapper[16176]: I1203 13:58:47.880941 16176 generic.go:334] "Generic (PLEG): container finished" podID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerID="c446c9128a6cdcc1b5ae17378b359fd872223fb29994a04233b4de462e78ee58" exitCode=0 Dec 03 13:58:47.881053 master-0 kubenswrapper[16176]: I1203 13:58:47.880972 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerDied","Data":"8cd048874efe4b30f5f42dd06b85dd1c97db84e3f9ffabe72fd07644d0447417"} Dec 03 13:58:47.881053 master-0 kubenswrapper[16176]: I1203 13:58:47.881014 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerDied","Data":"c446c9128a6cdcc1b5ae17378b359fd872223fb29994a04233b4de462e78ee58"} Dec 03 13:58:47.934877 master-0 kubenswrapper[16176]: I1203 13:58:47.934351 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 
13:58:48.625152 master-0 kubenswrapper[16176]: I1203 13:58:48.625101 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 13:58:48.660599 master-0 kubenswrapper[16176]: I1203 13:58:48.655243 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"] Dec 03 13:58:48.661130 master-0 kubenswrapper[16176]: I1203 13:58:48.661056 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="cluster-cloud-controller-manager" containerID="cri-o://8ef0a7e56fbe9931d72ba7b8b024332339ea3e21624a3cc8144f776dec699c05" gracePeriod=30 Dec 03 13:58:48.661204 master-0 kubenswrapper[16176]: I1203 13:58:48.661129 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="config-sync-controllers" containerID="cri-o://494b82a3c99dd4e6f68eb31d0be4e4dfb45665eea67b8ef1f7a9ec95d78781cf" gracePeriod=30 Dec 03 13:58:48.661286 master-0 kubenswrapper[16176]: I1203 13:58:48.661089 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="kube-rbac-proxy" containerID="cri-o://726b655b46a6c189da184bba2daa67a8497d7a4a3a7bc1d712341a2a28d72fdf" gracePeriod=30 Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897064 16176 generic.go:334] "Generic (PLEG): container finished" podID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" 
containerID="726b655b46a6c189da184bba2daa67a8497d7a4a3a7bc1d712341a2a28d72fdf" exitCode=0 Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897121 16176 generic.go:334] "Generic (PLEG): container finished" podID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerID="494b82a3c99dd4e6f68eb31d0be4e4dfb45665eea67b8ef1f7a9ec95d78781cf" exitCode=0 Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897131 16176 generic.go:334] "Generic (PLEG): container finished" podID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerID="8ef0a7e56fbe9931d72ba7b8b024332339ea3e21624a3cc8144f776dec699c05" exitCode=0 Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897166 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerDied","Data":"726b655b46a6c189da184bba2daa67a8497d7a4a3a7bc1d712341a2a28d72fdf"} Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897210 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerDied","Data":"494b82a3c99dd4e6f68eb31d0be4e4dfb45665eea67b8ef1f7a9ec95d78781cf"} Dec 03 13:58:48.898605 master-0 kubenswrapper[16176]: I1203 13:58:48.897226 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerDied","Data":"8ef0a7e56fbe9931d72ba7b8b024332339ea3e21624a3cc8144f776dec699c05"} Dec 03 13:58:50.913722 master-0 kubenswrapper[16176]: I1203 13:58:50.913663 16176 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/1.log" Dec 03 13:58:52.952846 master-0 kubenswrapper[16176]: I1203 13:58:52.952780 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 13:58:53.004422 master-0 kubenswrapper[16176]: I1203 13:58:53.004359 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:53.059431 master-0 kubenswrapper[16176]: I1203 13:58:53.059360 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") pod \"69f41c3e-713b-4532-8534-ceefb7f519bf\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " Dec 03 13:58:53.059699 master-0 kubenswrapper[16176]: I1203 13:58:53.059443 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") pod \"69f41c3e-713b-4532-8534-ceefb7f519bf\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " Dec 03 13:58:53.059699 master-0 kubenswrapper[16176]: I1203 13:58:53.059492 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") pod \"69f41c3e-713b-4532-8534-ceefb7f519bf\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " Dec 03 13:58:53.059699 master-0 kubenswrapper[16176]: I1203 13:58:53.059557 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") pod \"69f41c3e-713b-4532-8534-ceefb7f519bf\" (UID: \"69f41c3e-713b-4532-8534-ceefb7f519bf\") " Dec 03 13:58:53.061413 master-0 kubenswrapper[16176]: I1203 13:58:53.061351 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "69f41c3e-713b-4532-8534-ceefb7f519bf" (UID: "69f41c3e-713b-4532-8534-ceefb7f519bf"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:58:53.061920 master-0 kubenswrapper[16176]: I1203 13:58:53.061864 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config" (OuterVolumeSpecName: "config") pod "69f41c3e-713b-4532-8534-ceefb7f519bf" (UID: "69f41c3e-713b-4532-8534-ceefb7f519bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:58:53.063789 master-0 kubenswrapper[16176]: I1203 13:58:53.063737 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8" (OuterVolumeSpecName: "kube-api-access-2q8g8") pod "69f41c3e-713b-4532-8534-ceefb7f519bf" (UID: "69f41c3e-713b-4532-8534-ceefb7f519bf"). InnerVolumeSpecName "kube-api-access-2q8g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:58:53.063878 master-0 kubenswrapper[16176]: I1203 13:58:53.063842 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "69f41c3e-713b-4532-8534-ceefb7f519bf" (UID: "69f41c3e-713b-4532-8534-ceefb7f519bf"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:58:53.162094 master-0 kubenswrapper[16176]: I1203 13:58:53.161763 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q8g8\" (UniqueName: \"kubernetes.io/projected/69f41c3e-713b-4532-8534-ceefb7f519bf-kube-api-access-2q8g8\") on node \"master-0\" DevicePath \"\"" Dec 03 13:58:53.162094 master-0 kubenswrapper[16176]: I1203 13:58:53.161812 16176 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/69f41c3e-713b-4532-8534-ceefb7f519bf-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Dec 03 13:58:53.162094 master-0 kubenswrapper[16176]: I1203 13:58:53.161825 16176 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:58:53.162094 master-0 kubenswrapper[16176]: I1203 13:58:53.161836 16176 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/69f41c3e-713b-4532-8534-ceefb7f519bf-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:58:53.940567 master-0 kubenswrapper[16176]: I1203 13:58:53.940492 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" event={"ID":"69f41c3e-713b-4532-8534-ceefb7f519bf","Type":"ContainerDied","Data":"4d76c9da0cb38e6568c80306b2ab868ec380bcf051f8ab734abeae2624237c97"} Dec 03 13:58:53.940567 master-0 kubenswrapper[16176]: I1203 13:58:53.940575 16176 scope.go:117] "RemoveContainer" containerID="8cd048874efe4b30f5f42dd06b85dd1c97db84e3f9ffabe72fd07644d0447417" Dec 03 13:58:53.940868 master-0 kubenswrapper[16176]: I1203 13:58:53.940721 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd" Dec 03 13:58:58.444333 master-0 kubenswrapper[16176]: I1203 13:58:58.444174 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:58.445245 master-0 kubenswrapper[16176]: I1203 13:58:58.444553 16176 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 13:58:58.476520 master-0 kubenswrapper[16176]: I1203 13:58:58.476412 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 13:58:58.794250 master-0 kubenswrapper[16176]: I1203 13:58:58.794036 16176 scope.go:117] "RemoveContainer" containerID="85e3ae2d3b2246b8932537de502b184c8858439bc1a9ba3c872c5949cd9e443e" Dec 03 13:58:59.793761 master-0 kubenswrapper[16176]: I1203 13:58:59.793665 16176 scope.go:117] "RemoveContainer" containerID="6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c" Dec 03 13:58:59.794696 master-0 kubenswrapper[16176]: I1203 13:58:59.794013 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e" Dec 03 13:59:12.075335 master-0 kubenswrapper[16176]: I1203 13:59:12.075228 16176 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6" exitCode=1 Dec 03 13:59:12.075335 master-0 kubenswrapper[16176]: I1203 13:59:12.075317 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerDied","Data":"b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6"} Dec 03 13:59:12.076024 master-0 kubenswrapper[16176]: I1203 13:59:12.075804 16176 scope.go:117] "RemoveContainer" 
containerID="b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6" Dec 03 13:59:12.677327 master-0 kubenswrapper[16176]: I1203 13:59:12.675695 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"] Dec 03 13:59:12.705937 master-0 kubenswrapper[16176]: I1203 13:59:12.704943 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5775bfbf6d-vtvbd"] Dec 03 13:59:13.800917 master-0 kubenswrapper[16176]: I1203 13:59:13.800829 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" path="/var/lib/kubelet/pods/69f41c3e-713b-4532-8534-ceefb7f519bf/volumes" Dec 03 13:59:14.362517 master-0 kubenswrapper[16176]: I1203 13:59:14.361582 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 13:59:15.234669 master-0 kubenswrapper[16176]: I1203 13:59:15.234576 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"] Dec 03 13:59:20.643647 master-0 kubenswrapper[16176]: I1203 13:59:20.643585 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-582c5" Dec 03 13:59:20.659675 master-0 kubenswrapper[16176]: I1203 13:59:20.658921 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:59:20.688807 master-0 kubenswrapper[16176]: I1203 13:59:20.688293 16176 scope.go:117] "RemoveContainer" containerID="c446c9128a6cdcc1b5ae17378b359fd872223fb29994a04233b4de462e78ee58" Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.783681 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") pod \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.783753 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") pod \"1efcc24c-87bf-48cd-83b5-196c661a2517\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.783881 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") pod \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.783961 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") pod \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.783993 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") pod \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784022 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") pod \"1efcc24c-87bf-48cd-83b5-196c661a2517\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784040 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") pod \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\" (UID: \"85820c13-e5cf-4af1-bd1c-dd74ea151cac\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784071 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") pod \"1efcc24c-87bf-48cd-83b5-196c661a2517\" (UID: \"1efcc24c-87bf-48cd-83b5-196c661a2517\") " Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784409 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "85820c13-e5cf-4af1-bd1c-dd74ea151cac" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784575 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1efcc24c-87bf-48cd-83b5-196c661a2517" (UID: "1efcc24c-87bf-48cd-83b5-196c661a2517"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784585 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "85820c13-e5cf-4af1-bd1c-dd74ea151cac" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 13:59:20.784977 master-0 kubenswrapper[16176]: I1203 13:59:20.784880 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities" (OuterVolumeSpecName: "utilities") pod "1efcc24c-87bf-48cd-83b5-196c661a2517" (UID: "1efcc24c-87bf-48cd-83b5-196c661a2517"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 13:59:20.785761 master-0 kubenswrapper[16176]: I1203 13:59:20.785078 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images" (OuterVolumeSpecName: "images") pod "85820c13-e5cf-4af1-bd1c-dd74ea151cac" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 13:59:20.788638 master-0 kubenswrapper[16176]: I1203 13:59:20.788573 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "85820c13-e5cf-4af1-bd1c-dd74ea151cac" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 13:59:20.789000 master-0 kubenswrapper[16176]: I1203 13:59:20.788937 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl" (OuterVolumeSpecName: "kube-api-access-whkbl") pod "1efcc24c-87bf-48cd-83b5-196c661a2517" (UID: "1efcc24c-87bf-48cd-83b5-196c661a2517"). InnerVolumeSpecName "kube-api-access-whkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:59:20.810629 master-0 kubenswrapper[16176]: I1203 13:59:20.810538 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj" (OuterVolumeSpecName: "kube-api-access-dwmrj") pod "85820c13-e5cf-4af1-bd1c-dd74ea151cac" (UID: "85820c13-e5cf-4af1-bd1c-dd74ea151cac"). InnerVolumeSpecName "kube-api-access-dwmrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 13:59:20.846031 master-0 kubenswrapper[16176]: I1203 13:59:20.842959 16176 scope.go:117] "RemoveContainer" containerID="95ec319b339653ca571700fe578152f846441f95a9d1ddba3842062da1d7721c" Dec 03 13:59:20.885347 master-0 kubenswrapper[16176]: I1203 13:59:20.885242 16176 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/85820c13-e5cf-4af1-bd1c-dd74ea151cac-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885347 master-0 kubenswrapper[16176]: I1203 13:59:20.885345 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwmrj\" (UniqueName: \"kubernetes.io/projected/85820c13-e5cf-4af1-bd1c-dd74ea151cac-kube-api-access-dwmrj\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885356 16176 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-images\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885368 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whkbl\" (UniqueName: \"kubernetes.io/projected/1efcc24c-87bf-48cd-83b5-196c661a2517-kube-api-access-whkbl\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885379 16176 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/85820c13-e5cf-4af1-bd1c-dd74ea151cac-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885393 16176 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-catalog-content\") on node 
\"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885402 16176 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85820c13-e5cf-4af1-bd1c-dd74ea151cac-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:20.885541 master-0 kubenswrapper[16176]: I1203 13:59:20.885431 16176 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcc24c-87bf-48cd-83b5-196c661a2517-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 13:59:21.273739 master-0 kubenswrapper[16176]: I1203 13:59:21.267822 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/1.log" Dec 03 13:59:21.273739 master-0 kubenswrapper[16176]: I1203 13:59:21.267919 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"668d20dac70369b169c73554528623b6d50616a63ade734b8e38b044fe1f5b5c"} Dec 03 13:59:21.283777 master-0 kubenswrapper[16176]: I1203 13:59:21.278826 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-582c5" event={"ID":"1efcc24c-87bf-48cd-83b5-196c661a2517","Type":"ContainerDied","Data":"baf8480d9e2390e6727c0d4fc8ed3cdbe4111310f815a1aee6d6f586fad1452c"} Dec 03 13:59:21.283777 master-0 kubenswrapper[16176]: I1203 13:59:21.279134 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-582c5" Dec 03 13:59:21.325040 master-0 kubenswrapper[16176]: I1203 13:59:21.324610 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" event={"ID":"85820c13-e5cf-4af1-bd1c-dd74ea151cac","Type":"ContainerDied","Data":"5d90b6c1cac625bbda93e316c6ee64e966db5b6a1d0df50bfab24aaf6e8f87d2"} Dec 03 13:59:21.325040 master-0 kubenswrapper[16176]: I1203 13:59:21.324688 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh" Dec 03 13:59:21.325040 master-0 kubenswrapper[16176]: I1203 13:59:21.324714 16176 scope.go:117] "RemoveContainer" containerID="726b655b46a6c189da184bba2daa67a8497d7a4a3a7bc1d712341a2a28d72fdf" Dec 03 13:59:21.392425 master-0 kubenswrapper[16176]: I1203 13:59:21.392346 16176 scope.go:117] "RemoveContainer" containerID="494b82a3c99dd4e6f68eb31d0be4e4dfb45665eea67b8ef1f7a9ec95d78781cf" Dec 03 13:59:21.442640 master-0 kubenswrapper[16176]: I1203 13:59:21.441993 16176 scope.go:117] "RemoveContainer" containerID="8ef0a7e56fbe9931d72ba7b8b024332339ea3e21624a3cc8144f776dec699c05" Dec 03 13:59:21.494366 master-0 kubenswrapper[16176]: I1203 13:59:21.494307 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-582c5"] Dec 03 13:59:21.494633 master-0 kubenswrapper[16176]: I1203 13:59:21.494622 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-582c5"] Dec 03 13:59:21.501901 master-0 kubenswrapper[16176]: I1203 13:59:21.501531 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"] Dec 03 13:59:21.515495 master-0 kubenswrapper[16176]: I1203 
13:59:21.515418 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-252sh"] Dec 03 13:59:21.813290 master-0 kubenswrapper[16176]: I1203 13:59:21.809209 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efcc24c-87bf-48cd-83b5-196c661a2517" path="/var/lib/kubelet/pods/1efcc24c-87bf-48cd-83b5-196c661a2517/volumes" Dec 03 13:59:21.813290 master-0 kubenswrapper[16176]: I1203 13:59:21.810086 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" path="/var/lib/kubelet/pods/85820c13-e5cf-4af1-bd1c-dd74ea151cac/volumes" Dec 03 13:59:21.844294 master-0 kubenswrapper[16176]: E1203 13:59:21.840394 16176 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff18a80_0b0f_40ab_862e_e8b1ab32040a.slice/crio-b2796ceb45d5376a9c1436ab85eef38efef9b9b4b65ca013497bc820acb912db.scope\": RecentStats: unable to find data in memory cache]" Dec 03 13:59:22.406980 master-0 kubenswrapper[16176]: I1203 13:59:22.406913 16176 generic.go:334] "Generic (PLEG): container finished" podID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerID="4f368ced74c7ac35e1fccb2fb30f1dda38e6b13a287e69e834bd999044d39899" exitCode=0 Dec 03 13:59:22.407290 master-0 kubenswrapper[16176]: I1203 13:59:22.407035 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rjqz" event={"ID":"03494fce-881e-4eb6-bc3d-570f1d8e7c52","Type":"ContainerDied","Data":"4f368ced74c7ac35e1fccb2fb30f1dda38e6b13a287e69e834bd999044d39899"} Dec 03 13:59:22.412215 master-0 kubenswrapper[16176]: I1203 13:59:22.412151 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" 
event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"592afbc8ad5768dc17fb9b4954572832dfc48ca07ff7ac0a602707299294e300"} Dec 03 13:59:22.417723 master-0 kubenswrapper[16176]: I1203 13:59:22.417626 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"d78739a7694769882b7e47ea5ac08a10","Type":"ContainerStarted","Data":"110755eec31f033e8a23c335b60f91bfebf8427b1bb242510e5222a12558cd35"} Dec 03 13:59:22.420402 master-0 kubenswrapper[16176]: I1203 13:59:22.420335 16176 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="ce3fce98f20c580eec28a8c65a709bb757262ee20e42a95027fc5806473fd5fc" exitCode=0 Dec 03 13:59:22.420573 master-0 kubenswrapper[16176]: I1203 13:59:22.420439 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"ce3fce98f20c580eec28a8c65a709bb757262ee20e42a95027fc5806473fd5fc"} Dec 03 13:59:22.423969 master-0 kubenswrapper[16176]: I1203 13:59:22.423933 16176 generic.go:334] "Generic (PLEG): container finished" podID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerID="320614e39046999a3e824a76f7895c77c09541a21f3ed53189cb02ca9eb7cde8" exitCode=0 Dec 03 13:59:22.424076 master-0 kubenswrapper[16176]: I1203 13:59:22.424033 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerDied","Data":"320614e39046999a3e824a76f7895c77c09541a21f3ed53189cb02ca9eb7cde8"} Dec 03 13:59:22.429776 master-0 kubenswrapper[16176]: I1203 13:59:22.429696 16176 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="b2796ceb45d5376a9c1436ab85eef38efef9b9b4b65ca013497bc820acb912db" exitCode=0 Dec 03 
13:59:22.429895 master-0 kubenswrapper[16176]: I1203 13:59:22.429820 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"b2796ceb45d5376a9c1436ab85eef38efef9b9b4b65ca013497bc820acb912db"} Dec 03 13:59:22.444841 master-0 kubenswrapper[16176]: I1203 13:59:22.444772 16176 generic.go:334] "Generic (PLEG): container finished" podID="486d4964-18cc-4adc-b82d-b09627cadda4" containerID="6caa3e9b54572938352da0c64d4d4266814be9f6d091ed823f9d39e5bb9a0cb3" exitCode=0 Dec 03 13:59:22.445007 master-0 kubenswrapper[16176]: I1203 13:59:22.444889 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtm6s" event={"ID":"486d4964-18cc-4adc-b82d-b09627cadda4","Type":"ContainerDied","Data":"6caa3e9b54572938352da0c64d4d4266814be9f6d091ed823f9d39e5bb9a0cb3"} Dec 03 13:59:22.457485 master-0 kubenswrapper[16176]: I1203 13:59:22.455651 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/1.log" Dec 03 13:59:22.463991 master-0 kubenswrapper[16176]: I1203 13:59:22.461500 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"69e3deb6aaa7ca82dd236253a197e02b","Type":"ContainerStarted","Data":"e666b3a3d526b049a61d7c6d5b53f263418641d81cb64ea04c25d2f6f4646153"} Dec 03 13:59:22.463991 master-0 kubenswrapper[16176]: I1203 13:59:22.463979 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:59:28.505891 master-0 kubenswrapper[16176]: I1203 13:59:28.505771 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=45.505739171 
podStartE2EDuration="45.505739171s" podCreationTimestamp="2025-12-03 13:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 13:59:28.502695822 +0000 UTC m=+58.928336494" watchObservedRunningTime="2025-12-03 13:59:28.505739171 +0000 UTC m=+58.931379833" Dec 03 13:59:29.751230 master-0 kubenswrapper[16176]: I1203 13:59:29.751176 16176 scope.go:117] "RemoveContainer" containerID="c2910945f4e5ce5261fb54c997fa1eefdac85619b597882bb72810532ef0b541" Dec 03 13:59:34.565888 master-0 kubenswrapper[16176]: I1203 13:59:34.565776 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mtm6s_486d4964-18cc-4adc-b82d-b09627cadda4/extract-utilities/0.log" Dec 03 13:59:34.565888 master-0 kubenswrapper[16176]: I1203 13:59:34.565837 16176 generic.go:334] "Generic (PLEG): container finished" podID="486d4964-18cc-4adc-b82d-b09627cadda4" containerID="f596a1bc0f9c9e7a07c93dda12a13c51aefa8cfec2b06195aa8b024987f8e82d" exitCode=137 Dec 03 13:59:34.565888 master-0 kubenswrapper[16176]: I1203 13:59:34.565875 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtm6s" event={"ID":"486d4964-18cc-4adc-b82d-b09627cadda4","Type":"ContainerDied","Data":"f596a1bc0f9c9e7a07c93dda12a13c51aefa8cfec2b06195aa8b024987f8e82d"} Dec 03 13:59:34.565888 master-0 kubenswrapper[16176]: I1203 13:59:34.565923 16176 scope.go:117] "RemoveContainer" containerID="f596a1bc0f9c9e7a07c93dda12a13c51aefa8cfec2b06195aa8b024987f8e82d" Dec 03 13:59:36.042960 master-0 kubenswrapper[16176]: I1203 13:59:36.042859 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 13:59:44.345578 master-0 kubenswrapper[16176]: I1203 13:59:44.345460 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Dec 03 13:59:44.922921 master-0 kubenswrapper[16176]: I1203 13:59:44.922832 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Dec 03 13:59:45.663294 master-0 kubenswrapper[16176]: I1203 13:59:45.662777 16176 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b" exitCode=1
Dec 03 13:59:45.663294 master-0 kubenswrapper[16176]: I1203 13:59:45.662829 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b"}
Dec 03 13:59:45.664021 master-0 kubenswrapper[16176]: I1203 13:59:45.663461 16176 scope.go:117] "RemoveContainer" containerID="d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b"
Dec 03 13:59:53.209432 master-0 kubenswrapper[16176]: I1203 13:59:53.209347 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:59:54.345113 master-0 kubenswrapper[16176]: I1203 13:59:54.344924 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:59:54.516858 master-0 kubenswrapper[16176]: E1203 13:59:54.516745 16176 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 13:59:54.922029 master-0 kubenswrapper[16176]: I1203 13:59:54.921950 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 13:59:57.585141 master-0 kubenswrapper[16176]: I1203 13:59:57.585084 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Dec 03 14:00:07.599124 master-0 kubenswrapper[16176]: E1203 14:00:07.598996 16176 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T13:59:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 03 14:00:08.689047 master-0 kubenswrapper[16176]: I1203 14:00:08.688974 16176 scope.go:117] "RemoveContainer" containerID="bb1142c90b30cfc73cbe20d0170a4454a2d1e69af5a0227f242575978bf1302c"
Dec 03 14:00:08.717313 master-0 kubenswrapper[16176]: I1203 14:00:08.711544 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 14:00:08.763536 master-0 kubenswrapper[16176]: I1203 14:00:08.763507 16176 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 14:00:08.818312 master-0 kubenswrapper[16176]: I1203 14:00:08.818267 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") pod \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") "
Dec 03 14:00:08.818563 master-0 kubenswrapper[16176]: I1203 14:00:08.818537 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") pod \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") "
Dec 03 14:00:08.818747 master-0 kubenswrapper[16176]: I1203 14:00:08.818723 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") pod \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\" (UID: \"03494fce-881e-4eb6-bc3d-570f1d8e7c52\") "
Dec 03 14:00:08.823020 master-0 kubenswrapper[16176]: I1203 14:00:08.822946 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities" (OuterVolumeSpecName: "utilities") pod "03494fce-881e-4eb6-bc3d-570f1d8e7c52" (UID: "03494fce-881e-4eb6-bc3d-570f1d8e7c52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:00:08.824366 master-0 kubenswrapper[16176]: I1203 14:00:08.824289 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw" (OuterVolumeSpecName: "kube-api-access-6k2bw") pod "03494fce-881e-4eb6-bc3d-570f1d8e7c52" (UID: "03494fce-881e-4eb6-bc3d-570f1d8e7c52"). InnerVolumeSpecName "kube-api-access-6k2bw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:00:08.858054 master-0 kubenswrapper[16176]: I1203 14:00:08.857978 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rjqz"
Dec 03 14:00:08.858535 master-0 kubenswrapper[16176]: I1203 14:00:08.857979 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rjqz" event={"ID":"03494fce-881e-4eb6-bc3d-570f1d8e7c52","Type":"ContainerDied","Data":"99aab5d6addd41c622154cc6f270a6df7b17355eeaee15a1257331779d37b167"}
Dec 03 14:00:08.860165 master-0 kubenswrapper[16176]: I1203 14:00:08.860134 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtm6s"
Dec 03 14:00:08.860319 master-0 kubenswrapper[16176]: I1203 14:00:08.860116 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtm6s" event={"ID":"486d4964-18cc-4adc-b82d-b09627cadda4","Type":"ContainerDied","Data":"9224545b3d2efd569b43fb151a9affc7477ae0dec7b5095fa652c9ed4f6558a3"}
Dec 03 14:00:08.920552 master-0 kubenswrapper[16176]: I1203 14:00:08.920498 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") pod \"486d4964-18cc-4adc-b82d-b09627cadda4\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") "
Dec 03 14:00:08.920749 master-0 kubenswrapper[16176]: I1203 14:00:08.920597 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") pod \"486d4964-18cc-4adc-b82d-b09627cadda4\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") "
Dec 03 14:00:08.920749 master-0 kubenswrapper[16176]: I1203 14:00:08.920723 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") pod \"486d4964-18cc-4adc-b82d-b09627cadda4\" (UID: \"486d4964-18cc-4adc-b82d-b09627cadda4\") "
Dec 03 14:00:08.921388 master-0 kubenswrapper[16176]: I1203 14:00:08.921142 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k2bw\" (UniqueName: \"kubernetes.io/projected/03494fce-881e-4eb6-bc3d-570f1d8e7c52-kube-api-access-6k2bw\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:08.921388 master-0 kubenswrapper[16176]: I1203 14:00:08.921162 16176 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:08.921780 master-0 kubenswrapper[16176]: I1203 14:00:08.921610 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities" (OuterVolumeSpecName: "utilities") pod "486d4964-18cc-4adc-b82d-b09627cadda4" (UID: "486d4964-18cc-4adc-b82d-b09627cadda4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:00:08.923715 master-0 kubenswrapper[16176]: I1203 14:00:08.923685 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4" (OuterVolumeSpecName: "kube-api-access-m4pd4") pod "486d4964-18cc-4adc-b82d-b09627cadda4" (UID: "486d4964-18cc-4adc-b82d-b09627cadda4"). InnerVolumeSpecName "kube-api-access-m4pd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:00:08.934093 master-0 kubenswrapper[16176]: I1203 14:00:08.934048 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03494fce-881e-4eb6-bc3d-570f1d8e7c52" (UID: "03494fce-881e-4eb6-bc3d-570f1d8e7c52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:00:08.944803 master-0 kubenswrapper[16176]: I1203 14:00:08.944591 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "486d4964-18cc-4adc-b82d-b09627cadda4" (UID: "486d4964-18cc-4adc-b82d-b09627cadda4"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:00:09.023171 master-0 kubenswrapper[16176]: I1203 14:00:09.023105 16176 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:09.023171 master-0 kubenswrapper[16176]: I1203 14:00:09.023153 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4pd4\" (UniqueName: \"kubernetes.io/projected/486d4964-18cc-4adc-b82d-b09627cadda4-kube-api-access-m4pd4\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:09.023171 master-0 kubenswrapper[16176]: I1203 14:00:09.023165 16176 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/486d4964-18cc-4adc-b82d-b09627cadda4-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:09.023171 master-0 kubenswrapper[16176]: I1203 14:00:09.023175 16176 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03494fce-881e-4eb6-bc3d-570f1d8e7c52-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:00:09.850906 master-0 kubenswrapper[16176]: I1203 14:00:09.850542 16176 scope.go:117] "RemoveContainer" containerID="4f368ced74c7ac35e1fccb2fb30f1dda38e6b13a287e69e834bd999044d39899"
Dec 03 14:00:09.884060 master-0 kubenswrapper[16176]: I1203 14:00:09.883996 16176 scope.go:117] "RemoveContainer" containerID="350dd63506ed1a80d3072decc1cb3b56d8adbc1be35b68890406445d1bd9787c"
Dec 03 14:00:09.900891 master-0 kubenswrapper[16176]: I1203 14:00:09.900845 16176 scope.go:117] "RemoveContainer" containerID="6caa3e9b54572938352da0c64d4d4266814be9f6d091ed823f9d39e5bb9a0cb3"
Dec 03 14:00:09.922197 master-0 kubenswrapper[16176]: I1203 14:00:09.922147 16176 scope.go:117] "RemoveContainer" containerID="80d43f6a35677fbb779e0128bd416d4983486981e8541d1c8f04143c9e02738d"
Dec 03 14:00:10.709698 master-0 kubenswrapper[16176]: I1203 14:00:10.709639 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"]
Dec 03 14:00:10.710636 master-0 kubenswrapper[16176]: E1203 14:00:10.710611 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver"
Dec 03 14:00:10.710751 master-0 kubenswrapper[16176]: I1203 14:00:10.710736 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver"
Dec 03 14:00:10.710881 master-0 kubenswrapper[16176]: E1203 14:00:10.710865 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 14:00:10.710979 master-0 kubenswrapper[16176]: I1203 14:00:10.710965 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 14:00:10.711076 master-0 kubenswrapper[16176]: E1203 14:00:10.711061 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="kube-rbac-proxy"
Dec 03 14:00:10.711174 master-0 kubenswrapper[16176]: I1203 14:00:10.711159 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="kube-rbac-proxy"
Dec 03 14:00:10.711277 master-0 kubenswrapper[16176]: E1203 14:00:10.711244 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" containerName="installer"
Dec 03 14:00:10.711359 master-0 kubenswrapper[16176]: I1203 14:00:10.711345 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" containerName="installer"
Dec 03 14:00:10.711454 master-0 kubenswrapper[16176]: E1203 14:00:10.711439 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="cluster-cloud-controller-manager"
Dec 03 14:00:10.711539 master-0 kubenswrapper[16176]: I1203 14:00:10.711524 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="cluster-cloud-controller-manager"
Dec 03 14:00:10.711631 master-0 kubenswrapper[16176]: E1203 14:00:10.711617 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup"
Dec 03 14:00:10.711713 master-0 kubenswrapper[16176]: I1203 14:00:10.711700 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup"
Dec 03 14:00:10.711834 master-0 kubenswrapper[16176]: E1203 14:00:10.711818 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer"
Dec 03 14:00:10.711965 master-0 kubenswrapper[16176]: I1203 14:00:10.711950 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer"
Dec 03 14:00:10.712050 master-0 kubenswrapper[16176]: E1203 14:00:10.712036 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:00:10.712136 master-0 kubenswrapper[16176]: I1203 14:00:10.712123 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:00:10.712247 master-0 kubenswrapper[16176]: E1203 14:00:10.712231 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="kube-rbac-proxy"
Dec 03 14:00:10.712368 master-0 kubenswrapper[16176]: I1203 14:00:10.712351 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="kube-rbac-proxy"
Dec 03 14:00:10.712477 master-0 kubenswrapper[16176]: E1203 14:00:10.712460 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer"
Dec 03 14:00:10.712562 master-0 kubenswrapper[16176]: I1203 14:00:10.712547 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer"
Dec 03 14:00:10.712659 master-0 kubenswrapper[16176]: E1203 14:00:10.712641 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerName="extract-utilities"
Dec 03 14:00:10.712754 master-0 kubenswrapper[16176]: I1203 14:00:10.712739 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerName="extract-utilities"
Dec 03 14:00:10.712971 master-0 kubenswrapper[16176]: E1203 14:00:10.712954 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-utilities"
Dec 03 14:00:10.713064 master-0 kubenswrapper[16176]: I1203 14:00:10.713049 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-utilities"
Dec 03 14:00:10.713149 master-0 kubenswrapper[16176]: E1203 14:00:10.713135 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="multus-admission-controller"
Dec 03 14:00:10.713221 master-0 kubenswrapper[16176]: I1203 14:00:10.713209 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="multus-admission-controller"
Dec 03 14:00:10.713328 master-0 kubenswrapper[16176]: E1203 14:00:10.713311 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9477082b-005c-4ff5-812a-7c3230f60da2"
containerName="installer"
Dec 03 14:00:10.713413 master-0 kubenswrapper[16176]: I1203 14:00:10.713399 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="9477082b-005c-4ff5-812a-7c3230f60da2" containerName="installer"
Dec 03 14:00:10.713521 master-0 kubenswrapper[16176]: E1203 14:00:10.713506 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer"
Dec 03 14:00:10.713606 master-0 kubenswrapper[16176]: I1203 14:00:10.713592 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer"
Dec 03 14:00:10.713700 master-0 kubenswrapper[16176]: E1203 14:00:10.713685 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 14:00:10.713801 master-0 kubenswrapper[16176]: I1203 14:00:10.713785 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 14:00:10.713908 master-0 kubenswrapper[16176]: E1203 14:00:10.713892 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-content"
Dec 03 14:00:10.713988 master-0 kubenswrapper[16176]: I1203 14:00:10.713974 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-content"
Dec 03 14:00:10.714080 master-0 kubenswrapper[16176]: E1203 14:00:10.714066 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="config-sync-controllers"
Dec 03 14:00:10.714187 master-0 kubenswrapper[16176]: I1203 14:00:10.714171 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="config-sync-controllers"
Dec 03 14:00:10.714300 master-0 kubenswrapper[16176]: E1203 14:00:10.714283 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-utilities"
Dec 03 14:00:10.714388 master-0 kubenswrapper[16176]: I1203 14:00:10.714374 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-utilities"
Dec 03 14:00:10.714482 master-0 kubenswrapper[16176]: E1203 14:00:10.714466 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerName="extract-content"
Dec 03 14:00:10.714569 master-0 kubenswrapper[16176]: I1203 14:00:10.714554 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerName="extract-content"
Dec 03 14:00:10.714658 master-0 kubenswrapper[16176]: E1203 14:00:10.714644 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="machine-approver-controller"
Dec 03 14:00:10.714747 master-0 kubenswrapper[16176]: I1203 14:00:10.714729 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="machine-approver-controller"
Dec 03 14:00:10.714831 master-0 kubenswrapper[16176]: E1203 14:00:10.714816 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 14:00:10.714917 master-0 kubenswrapper[16176]: I1203 14:00:10.714902 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 14:00:10.714998 master-0 kubenswrapper[16176]: E1203 14:00:10.714983 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="kube-rbac-proxy"
Dec 03 14:00:10.715083 master-0 kubenswrapper[16176]: I1203 14:00:10.715069 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="kube-rbac-proxy"
Dec 03 14:00:10.715440 master-0 kubenswrapper[16176]: I1203 14:00:10.715418 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver"
Dec 03 14:00:10.715573 master-0 kubenswrapper[16176]: I1203 14:00:10.715557 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="multus-admission-controller"
Dec 03 14:00:10.715663 master-0 kubenswrapper[16176]: I1203 14:00:10.715649 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee6150c4-22d1-465b-a934-74d5e197d646" containerName="installer"
Dec 03 14:00:10.715764 master-0 kubenswrapper[16176]: I1203 14:00:10.715750 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="9477082b-005c-4ff5-812a-7c3230f60da2" containerName="installer"
Dec 03 14:00:10.715849 master-0 kubenswrapper[16176]: I1203 14:00:10.715836 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="config-sync-controllers"
Dec 03 14:00:10.715990 master-0 kubenswrapper[16176]: I1203 14:00:10.715975 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="kube-rbac-proxy"
Dec 03 14:00:10.716096 master-0 kubenswrapper[16176]: I1203 14:00:10.716079 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-utilities"
Dec 03 14:00:10.716180 master-0 kubenswrapper[16176]: I1203 14:00:10.716167 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="725fa88d-f29d-4dee-bfba-6e1c4506f73c" containerName="installer"
Dec 03 14:00:10.716284 master-0 kubenswrapper[16176]: I1203 14:00:10.716252 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfe6ad9-3234-47eb-8512-87dd87f7b3a6" containerName="installer"
Dec 03 14:00:10.716372 master-0 kubenswrapper[16176]: I1203 14:00:10.716358 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eae43c1-ef3e-4175-8f95-220e490e3017" containerName="prober"
Dec 03 14:00:10.716450 master-0 kubenswrapper[16176]: I1203 14:00:10.716435 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="cluster-cloud-controller-manager"
Dec 03 14:00:10.716539 master-0 kubenswrapper[16176]: I1203 14:00:10.716525 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f41c3e-713b-4532-8534-ceefb7f519bf" containerName="machine-approver-controller"
Dec 03 14:00:10.716624 master-0 kubenswrapper[16176]: I1203 14:00:10.716611 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="63aae3b9-9a72-497e-af01-5d8b8d0ac876" containerName="kube-rbac-proxy"
Dec 03 14:00:10.716715 master-0 kubenswrapper[16176]: I1203 14:00:10.716702 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="85820c13-e5cf-4af1-bd1c-dd74ea151cac" containerName="kube-rbac-proxy"
Dec 03 14:00:10.716795 master-0 kubenswrapper[16176]: I1203 14:00:10.716782 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" containerName="extract-content"
Dec 03 14:00:10.716878 master-0 kubenswrapper[16176]: I1203 14:00:10.716865 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="13238af3704fe583f617f61e755cf4c2" containerName="setup"
Dec 03 14:00:10.716967 master-0 kubenswrapper[16176]: I1203 14:00:10.716953 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="9afa5e14-6832-4650-9401-97359c445e61" containerName="assisted-installer-controller"
Dec 03 14:00:10.717053 master-0 kubenswrapper[16176]: I1203 14:00:10.717040 16176 memory_manager.go:354] "RemoveStaleState removing state"
podUID="13238af3704fe583f617f61e755cf4c2" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:00:10.717132 master-0 kubenswrapper[16176]: I1203 14:00:10.717119 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f28c77-b15c-4b86-93c8-221c0cc82bb2" containerName="installer"
Dec 03 14:00:10.717219 master-0 kubenswrapper[16176]: I1203 14:00:10.717205 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" containerName="extract-content"
Dec 03 14:00:10.717333 master-0 kubenswrapper[16176]: I1203 14:00:10.717319 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce26e464-9a7c-4b22-a2b4-03706b351455" containerName="cluster-version-operator"
Dec 03 14:00:10.718500 master-0 kubenswrapper[16176]: I1203 14:00:10.718468 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6z4sc"]
Dec 03 14:00:10.718757 master-0 kubenswrapper[16176]: I1203 14:00:10.718714 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.721372 master-0 kubenswrapper[16176]: I1203 14:00:10.721075 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Dec 03 14:00:10.722775 master-0 kubenswrapper[16176]: I1203 14:00:10.722741 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"]
Dec 03 14:00:10.723014 master-0 kubenswrapper[16176]: I1203 14:00:10.722981 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:00:10.723755 master-0 kubenswrapper[16176]: I1203 14:00:10.723730 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ddwmn"]
Dec 03 14:00:10.723928 master-0 kubenswrapper[16176]: I1203 14:00:10.723881 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:00:10.724957 master-0 kubenswrapper[16176]: I1203 14:00:10.724931 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"]
Dec 03 14:00:10.725185 master-0 kubenswrapper[16176]: I1203 14:00:10.725126 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:00:10.725666 master-0 kubenswrapper[16176]: I1203 14:00:10.725600 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Dec 03 14:00:10.725739 master-0 kubenswrapper[16176]: I1203 14:00:10.725680 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw"
Dec 03 14:00:10.725817 master-0 kubenswrapper[16176]: I1203 14:00:10.725616 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:00:10.725873 master-0 kubenswrapper[16176]: I1203 14:00:10.725860 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 14:00:10.725915 master-0 kubenswrapper[16176]: I1203 14:00:10.725616 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2fgkw"
Dec 03 14:00:10.726473 master-0 kubenswrapper[16176]: I1203 14:00:10.726447 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:00:10.730621 master-0 kubenswrapper[16176]: I1203 14:00:10.730143 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 14:00:10.730621 master-0 kubenswrapper[16176]: I1203 14:00:10.730165 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 14:00:10.730621 master-0 kubenswrapper[16176]: I1203 14:00:10.730174 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 14:00:10.730621 master-0 kubenswrapper[16176]: I1203 14:00:10.730183 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-n8h5v"
Dec 03 14:00:10.731816 master-0 kubenswrapper[16176]: I1203 14:00:10.731783 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 03 14:00:10.732149 master-0 kubenswrapper[16176]: I1203 14:00:10.732125 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 14:00:10.732588 master-0 kubenswrapper[16176]: I1203 14:00:10.732564 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Dec 03 14:00:10.733427 master-0 kubenswrapper[16176]: I1203 14:00:10.733394 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-cb9jg"
Dec 03 14:00:10.734229 master-0 kubenswrapper[16176]: I1203 14:00:10.734186 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 14:00:10.737747 master-0 kubenswrapper[16176]: I1203 14:00:10.735021 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 14:00:10.852869 master-0 kubenswrapper[16176]: I1203 14:00:10.852735 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.852914 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853032 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853073 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhf9r\" (UniqueName:
\"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853100 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853141 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853323 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853496 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:00:10.853569 master-0 kubenswrapper[16176]: I1203 14:00:10.853528 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:00:10.853860 master-0 kubenswrapper[16176]: I1203 14:00:10.853657 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.853860 master-0 kubenswrapper[16176]: I1203 14:00:10.853727 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:00:10.853860 master-0 kubenswrapper[16176]: I1203 14:00:10.853777 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:00:10.853860 master-0 kubenswrapper[16176]: I1203 14:00:10.853848 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.853980 master-0 kubenswrapper[16176]: I1203 14:00:10.853897 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:00:10.853980 master-0 kubenswrapper[16176]: I1203 14:00:10.853965 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:00:10.854047 master-0 kubenswrapper[16176]: I1203 14:00:10.853995 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03
14:00:10.854245 master-0 kubenswrapper[16176]: I1203 14:00:10.854157 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.854342 master-0 kubenswrapper[16176]: I1203 14:00:10.854307 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.885359 master-0 kubenswrapper[16176]: I1203 14:00:10.884940 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc"} Dec 03 14:00:10.955913 master-0 kubenswrapper[16176]: I1203 14:00:10.955812 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:10.955913 master-0 kubenswrapper[16176]: I1203 14:00:10.955929 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: 
\"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956045 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956098 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956135 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956173 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956211 16176 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956246 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956304 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.956340 master-0 kubenswrapper[16176]: I1203 14:00:10.956340 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956368 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956397 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956422 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956448 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956479 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956511 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956535 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956578 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956703 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.956813 16176 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.957205 master-0 kubenswrapper[16176]: I1203 14:00:10.957123 16176 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 14:00:10.957679 master-0 kubenswrapper[16176]: I1203 14:00:10.957326 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:10.957679 master-0 kubenswrapper[16176]: I1203 14:00:10.957598 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.958098 master-0 kubenswrapper[16176]: I1203 14:00:10.958067 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:10.958485 master-0 kubenswrapper[16176]: I1203 14:00:10.958400 16176 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.959692 master-0 kubenswrapper[16176]: I1203 14:00:10.959594 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:10.960166 master-0 kubenswrapper[16176]: I1203 14:00:10.959916 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.960621 master-0 kubenswrapper[16176]: I1203 14:00:10.960580 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.960798 master-0 kubenswrapper[16176]: I1203 14:00:10.960762 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod 
\"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:10.960991 master-0 kubenswrapper[16176]: I1203 14:00:10.960930 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:10.963046 master-0 kubenswrapper[16176]: I1203 14:00:10.962581 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:10.963642 master-0 kubenswrapper[16176]: I1203 14:00:10.963571 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:10.993965 master-0 kubenswrapper[16176]: I1203 14:00:10.993339 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6z4sc"] Dec 03 14:00:11.003648 master-0 kubenswrapper[16176]: I1203 14:00:11.003583 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddwmn"] Dec 03 14:00:11.080492 master-0 kubenswrapper[16176]: I1203 14:00:11.078194 16176 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"] Dec 03 14:00:11.080492 master-0 kubenswrapper[16176]: I1203 14:00:11.078793 16176 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:00:11.080492 master-0 kubenswrapper[16176]: I1203 14:00:11.079082 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="6cfbc1ee6cdd01fccdd5a1a088f4d538" containerName="startup-monitor" containerID="cri-o://2927a79f39ed7802aaaf3f621d8e971809af85925fbb920aac36cdee358d7dd1" gracePeriod=5 Dec 03 14:00:11.096866 master-0 kubenswrapper[16176]: I1203 14:00:11.096792 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:11.104290 master-0 kubenswrapper[16176]: I1203 14:00:11.103320 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:11.104290 master-0 kubenswrapper[16176]: I1203 14:00:11.103743 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod 
\"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:11.104596 master-0 kubenswrapper[16176]: I1203 14:00:11.104541 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:11.108287 master-0 kubenswrapper[16176]: I1203 14:00:11.105780 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:11.116978 master-0 kubenswrapper[16176]: I1203 14:00:11.116870 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:11.138176 master-0 kubenswrapper[16176]: I1203 14:00:11.138102 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:00:11.246345 master-0 kubenswrapper[16176]: I1203 14:00:11.246139 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 14:00:11.301655 master-0 kubenswrapper[16176]: I1203 14:00:11.301582 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtm6s"] Dec 03 14:00:11.348248 master-0 kubenswrapper[16176]: I1203 14:00:11.348180 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:00:11.369871 master-0 kubenswrapper[16176]: I1203 14:00:11.369802 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:11.397172 master-0 kubenswrapper[16176]: I1203 14:00:11.396836 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:00:11.525523 master-0 kubenswrapper[16176]: W1203 14:00:11.525450 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b681889_eb2c_41fb_a1dc_69b99227b45b.slice/crio-03bbad3c9b433fa7e24cdbf49312f4290c38c4d016a1ffca621b84bf07dda1f5 WatchSource:0}: Error finding container 03bbad3c9b433fa7e24cdbf49312f4290c38c4d016a1ffca621b84bf07dda1f5: Status 404 returned error can't find the container with id 03bbad3c9b433fa7e24cdbf49312f4290c38c4d016a1ffca621b84bf07dda1f5 Dec 03 14:00:11.526699 master-0 kubenswrapper[16176]: W1203 14:00:11.526647 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b62b2f_1e7a_4f1b_a988_4355d93dda46.slice/crio-691b54baa66555a2d2bf0782874908569cc77c3bb805ebc101cb160e9d93408d WatchSource:0}: Error finding container 691b54baa66555a2d2bf0782874908569cc77c3bb805ebc101cb160e9d93408d: Status 404 returned error can't find the container with id 691b54baa66555a2d2bf0782874908569cc77c3bb805ebc101cb160e9d93408d Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.570759 16176 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.570880 16176 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: E1203 14:00:11.571219 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571245 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: E1203 14:00:11.571299 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571307 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: E1203 14:00:11.571319 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfbc1ee6cdd01fccdd5a1a088f4d538" containerName="startup-monitor" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571325 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfbc1ee6cdd01fccdd5a1a088f4d538" containerName="startup-monitor" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571452 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571475 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571512 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfbc1ee6cdd01fccdd5a1a088f4d538" containerName="startup-monitor" Dec 03 14:00:11.575935 master-0 
kubenswrapper[16176]: E1203 14:00:11.571628 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571638 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571850 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" containerID="cri-o://110755eec31f033e8a23c335b60f91bfebf8427b1bb242510e5222a12558cd35" gracePeriod=30 Dec 03 14:00:11.575935 master-0 kubenswrapper[16176]: I1203 14:00:11.571836 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78739a7694769882b7e47ea5ac08a10" containerName="kube-scheduler" Dec 03 14:00:11.576993 master-0 kubenswrapper[16176]: I1203 14:00:11.576949 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.671608 master-0 kubenswrapper[16176]: I1203 14:00:11.671548 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.671717 master-0 kubenswrapper[16176]: I1203 14:00:11.671621 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.774110 master-0 kubenswrapper[16176]: I1203 14:00:11.773986 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.774354 master-0 kubenswrapper[16176]: I1203 14:00:11.774147 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.774354 master-0 kubenswrapper[16176]: I1203 14:00:11.774171 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.774354 master-0 kubenswrapper[16176]: I1203 14:00:11.774311 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.806482 master-0 kubenswrapper[16176]: I1203 14:00:11.806431 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486d4964-18cc-4adc-b82d-b09627cadda4" path="/var/lib/kubelet/pods/486d4964-18cc-4adc-b82d-b09627cadda4/volumes" Dec 03 14:00:11.891617 master-0 kubenswrapper[16176]: I1203 14:00:11.890967 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"] Dec 03 14:00:11.917782 master-0 kubenswrapper[16176]: I1203 14:00:11.917723 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:11.924227 master-0 kubenswrapper[16176]: I1203 14:00:11.923366 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6rjqz"] Dec 03 14:00:11.926739 master-0 kubenswrapper[16176]: I1203 14:00:11.926700 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ddwmn"] Dec 03 14:00:11.926739 master-0 kubenswrapper[16176]: I1203 14:00:11.926741 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"03bbad3c9b433fa7e24cdbf49312f4290c38c4d016a1ffca621b84bf07dda1f5"} Dec 03 14:00:11.936679 master-0 kubenswrapper[16176]: I1203 14:00:11.936230 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:00:11.942099 master-0 kubenswrapper[16176]: I1203 14:00:11.942032 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"bb6e57764d5c2c29434f1453ee95cd9466c21af1724c02846402233b67908439"} Dec 03 14:00:11.942602 master-0 kubenswrapper[16176]: I1203 14:00:11.942388 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:00:11.950571 master-0 kubenswrapper[16176]: I1203 14:00:11.950520 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"691b54baa66555a2d2bf0782874908569cc77c3bb805ebc101cb160e9d93408d"} Dec 03 
14:00:11.957310 master-0 kubenswrapper[16176]: I1203 14:00:11.957174 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=30.957132719 podStartE2EDuration="30.957132719s" podCreationTimestamp="2025-12-03 13:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:11.95028814 +0000 UTC m=+102.375928822" watchObservedRunningTime="2025-12-03 14:00:11.957132719 +0000 UTC m=+102.382773391" Dec 03 14:00:11.960330 master-0 kubenswrapper[16176]: I1203 14:00:11.960286 16176 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="360b105f-8399-4d33-bfec-6253d2bad660" Dec 03 14:00:12.969743 master-0 kubenswrapper[16176]: I1203 14:00:12.969650 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"7ce27bf3ecc3ace805ad64ff7c907e040775e602f32be5d18b3ab2f410b128a0"} Dec 03 14:00:12.972185 master-0 kubenswrapper[16176]: I1203 14:00:12.972123 16176 generic.go:334] "Generic (PLEG): container finished" podID="d78739a7694769882b7e47ea5ac08a10" containerID="110755eec31f033e8a23c335b60f91bfebf8427b1bb242510e5222a12558cd35" exitCode=0 Dec 03 14:00:12.972346 master-0 kubenswrapper[16176]: I1203 14:00:12.972278 16176 scope.go:117] "RemoveContainer" containerID="b60b961d9b777de7b718dfcddaad0ec42a607b7dc8b31e285e98ecdc954d79f6" Dec 03 14:00:12.975109 master-0 kubenswrapper[16176]: I1203 14:00:12.975072 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" 
event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"3448e0a3f35606c9594f8a7bf33b0cdd9fd90d740c89dc5c58476c524a180d4e"} Dec 03 14:00:12.976640 master-0 kubenswrapper[16176]: I1203 14:00:12.976597 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"5b897834f693b15d6c895d9748f4236c069b41b71d42c3fce4d9a8363e167436"} Dec 03 14:00:12.977859 master-0 kubenswrapper[16176]: I1203 14:00:12.977838 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"8a36580bf7e65538540744fa97a767d9b90dcb01d44e053da5be66f84c534850"} Dec 03 14:00:12.982460 master-0 kubenswrapper[16176]: I1203 14:00:12.982406 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"da69afae29446aeabbed105db5e7cd5bf817f6d679bc553eefcd18107527ad0c"} Dec 03 14:00:12.984099 master-0 kubenswrapper[16176]: I1203 14:00:12.984045 16176 generic.go:334] "Generic (PLEG): container finished" podID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" containerID="296cf7d08c2a38ce14296567e7b95dead04de3b7bcb6bac3f6e692cbdb93718e" exitCode=0 Dec 03 14:00:12.984164 master-0 kubenswrapper[16176]: I1203 14:00:12.984117 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5dcaccc5-46b1-4a38-b3af-6839dec529d3","Type":"ContainerDied","Data":"296cf7d08c2a38ce14296567e7b95dead04de3b7bcb6bac3f6e692cbdb93718e"} Dec 03 14:00:12.986110 master-0 kubenswrapper[16176]: I1203 14:00:12.986077 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"fbdaf01cbfe994f2dae86341472aa776f7166e138673e691ad073797fa8ee297"} Dec 03 14:00:13.209865 master-0 kubenswrapper[16176]: I1203 14:00:13.209779 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:00:13.214622 master-0 kubenswrapper[16176]: I1203 14:00:13.214576 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:00:13.800448 master-0 kubenswrapper[16176]: I1203 14:00:13.800363 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03494fce-881e-4eb6-bc3d-570f1d8e7c52" path="/var/lib/kubelet/pods/03494fce-881e-4eb6-bc3d-570f1d8e7c52/volumes" Dec 03 14:00:13.991603 master-0 kubenswrapper[16176]: I1203 14:00:13.991500 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:00:14.418757 master-0 kubenswrapper[16176]: I1203 14:00:14.418716 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 14:00:14.515337 master-0 kubenswrapper[16176]: I1203 14:00:14.515250 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") pod \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " Dec 03 14:00:14.515337 master-0 kubenswrapper[16176]: I1203 14:00:14.515326 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") pod \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " Dec 03 14:00:14.515690 master-0 kubenswrapper[16176]: I1203 14:00:14.515405 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") pod \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\" (UID: \"5dcaccc5-46b1-4a38-b3af-6839dec529d3\") " Dec 03 14:00:14.515690 master-0 kubenswrapper[16176]: I1203 14:00:14.515596 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock" (OuterVolumeSpecName: "var-lock") pod "5dcaccc5-46b1-4a38-b3af-6839dec529d3" (UID: "5dcaccc5-46b1-4a38-b3af-6839dec529d3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:14.515786 master-0 kubenswrapper[16176]: I1203 14:00:14.515710 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5dcaccc5-46b1-4a38-b3af-6839dec529d3" (UID: "5dcaccc5-46b1-4a38-b3af-6839dec529d3"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:14.518703 master-0 kubenswrapper[16176]: I1203 14:00:14.518640 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5dcaccc5-46b1-4a38-b3af-6839dec529d3" (UID: "5dcaccc5-46b1-4a38-b3af-6839dec529d3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:00:14.617618 master-0 kubenswrapper[16176]: I1203 14:00:14.617448 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:14.617618 master-0 kubenswrapper[16176]: I1203 14:00:14.617518 16176 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:14.617618 master-0 kubenswrapper[16176]: I1203 14:00:14.617531 16176 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5dcaccc5-46b1-4a38-b3af-6839dec529d3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:15.015484 master-0 kubenswrapper[16176]: I1203 14:00:15.015343 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Dec 03 14:00:15.024779 master-0 kubenswrapper[16176]: I1203 14:00:15.024709 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"5dcaccc5-46b1-4a38-b3af-6839dec529d3","Type":"ContainerDied","Data":"8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d"} Dec 03 14:00:15.024779 master-0 kubenswrapper[16176]: I1203 14:00:15.024783 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcabcf0ace4fc4b09b1bce1efa0914d0f6cd9056224be4cc9e1aaf8384c6f7d" Dec 03 14:00:15.962403 master-0 kubenswrapper[16176]: I1203 14:00:15.962328 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:00:15.962690 master-0 kubenswrapper[16176]: I1203 14:00:15.962422 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:00:15.962690 master-0 kubenswrapper[16176]: I1203 14:00:15.962345 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:00:15.962690 master-0 kubenswrapper[16176]: I1203 14:00:15.962513 16176 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:00:17.029904 master-0 kubenswrapper[16176]: I1203 14:00:17.029801 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"269c3b3e3a48a9f1344279f8f2e0e573cda9ba0c876995af4491a1c927200ebe"} Dec 03 14:00:17.031369 master-0 kubenswrapper[16176]: I1203 14:00:17.031327 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"af5dcdfc80a9aed23308058402ef5ee8200ed7993e5f2d9cd60e937998ff7919"} Dec 03 14:00:17.033394 master-0 kubenswrapper[16176]: I1203 14:00:17.033338 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_6cfbc1ee6cdd01fccdd5a1a088f4d538/startup-monitor/0.log" Dec 03 14:00:17.033470 master-0 kubenswrapper[16176]: I1203 14:00:17.033421 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfbc1ee6cdd01fccdd5a1a088f4d538" containerID="2927a79f39ed7802aaaf3f621d8e971809af85925fbb920aac36cdee358d7dd1" exitCode=137 Dec 03 14:00:17.035738 master-0 kubenswrapper[16176]: I1203 14:00:17.035698 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df1b15d7a8d6cd37f66cabd1c573d44fa160da9f8ea27336e469bd1ae44427d" Dec 03 14:00:17.037408 master-0 kubenswrapper[16176]: I1203 14:00:17.037373 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 14:00:17.037946 master-0 kubenswrapper[16176]: I1203 14:00:17.037900 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"b70d802ed5349d93be4b21929843e3c3a0b76580d514cea5aa17e96cf9684487"} Dec 03 14:00:17.039736 master-0 kubenswrapper[16176]: I1203 14:00:17.039686 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd"} Dec 03 14:00:17.157419 master-0 kubenswrapper[16176]: I1203 14:00:17.157220 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") pod \"d78739a7694769882b7e47ea5ac08a10\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " Dec 03 14:00:17.157419 master-0 kubenswrapper[16176]: I1203 14:00:17.157332 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") pod \"d78739a7694769882b7e47ea5ac08a10\" (UID: \"d78739a7694769882b7e47ea5ac08a10\") " Dec 03 14:00:17.157419 master-0 kubenswrapper[16176]: I1203 14:00:17.157411 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets" (OuterVolumeSpecName: "secrets") pod "d78739a7694769882b7e47ea5ac08a10" (UID: "d78739a7694769882b7e47ea5ac08a10"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:17.157760 master-0 kubenswrapper[16176]: I1203 14:00:17.157537 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs" (OuterVolumeSpecName: "logs") pod "d78739a7694769882b7e47ea5ac08a10" (UID: "d78739a7694769882b7e47ea5ac08a10"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:17.157795 master-0 kubenswrapper[16176]: I1203 14:00:17.157762 16176 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-secrets\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:17.157795 master-0 kubenswrapper[16176]: I1203 14:00:17.157785 16176 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/d78739a7694769882b7e47ea5ac08a10-logs\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:17.801918 master-0 kubenswrapper[16176]: I1203 14:00:17.801849 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78739a7694769882b7e47ea5ac08a10" path="/var/lib/kubelet/pods/d78739a7694769882b7e47ea5ac08a10/volumes" Dec 03 14:00:17.802361 master-0 kubenswrapper[16176]: I1203 14:00:17.802323 16176 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Dec 03 14:00:17.918444 master-0 kubenswrapper[16176]: I1203 14:00:17.918401 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_6cfbc1ee6cdd01fccdd5a1a088f4d538/startup-monitor/0.log" Dec 03 14:00:17.918655 master-0 kubenswrapper[16176]: I1203 14:00:17.918485 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:00:18.046484 master-0 kubenswrapper[16176]: I1203 14:00:18.046432 16176 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd" exitCode=0 Dec 03 14:00:18.048873 master-0 kubenswrapper[16176]: I1203 14:00:18.048848 16176 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="af5dcdfc80a9aed23308058402ef5ee8200ed7993e5f2d9cd60e937998ff7919" exitCode=0 Dec 03 14:00:18.051968 master-0 kubenswrapper[16176]: I1203 14:00:18.051919 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_6cfbc1ee6cdd01fccdd5a1a088f4d538/startup-monitor/0.log" Dec 03 14:00:18.052105 master-0 kubenswrapper[16176]: I1203 14:00:18.052043 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Dec 03 14:00:18.052105 master-0 kubenswrapper[16176]: I1203 14:00:18.052060 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:00:18.074240 master-0 kubenswrapper[16176]: I1203 14:00:18.074128 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") pod \"6cfbc1ee6cdd01fccdd5a1a088f4d538\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " Dec 03 14:00:18.074240 master-0 kubenswrapper[16176]: I1203 14:00:18.074187 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") pod \"6cfbc1ee6cdd01fccdd5a1a088f4d538\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " Dec 03 14:00:18.074384 master-0 kubenswrapper[16176]: I1203 14:00:18.074335 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") pod \"6cfbc1ee6cdd01fccdd5a1a088f4d538\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " Dec 03 14:00:18.074384 master-0 kubenswrapper[16176]: I1203 14:00:18.074370 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") pod \"6cfbc1ee6cdd01fccdd5a1a088f4d538\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " Dec 03 14:00:18.074384 master-0 kubenswrapper[16176]: I1203 14:00:18.074367 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock" (OuterVolumeSpecName: "var-lock") pod "6cfbc1ee6cdd01fccdd5a1a088f4d538" (UID: "6cfbc1ee6cdd01fccdd5a1a088f4d538"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:18.074479 master-0 kubenswrapper[16176]: I1203 14:00:18.074421 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "6cfbc1ee6cdd01fccdd5a1a088f4d538" (UID: "6cfbc1ee6cdd01fccdd5a1a088f4d538"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:18.074479 master-0 kubenswrapper[16176]: I1203 14:00:18.074434 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") pod \"6cfbc1ee6cdd01fccdd5a1a088f4d538\" (UID: \"6cfbc1ee6cdd01fccdd5a1a088f4d538\") " Dec 03 14:00:18.074742 master-0 kubenswrapper[16176]: I1203 14:00:18.074693 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log" (OuterVolumeSpecName: "var-log") pod "6cfbc1ee6cdd01fccdd5a1a088f4d538" (UID: "6cfbc1ee6cdd01fccdd5a1a088f4d538"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:18.074862 master-0 kubenswrapper[16176]: I1203 14:00:18.074717 16176 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:18.074936 master-0 kubenswrapper[16176]: I1203 14:00:18.074871 16176 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:18.074936 master-0 kubenswrapper[16176]: I1203 14:00:18.074756 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests" (OuterVolumeSpecName: "manifests") pod "6cfbc1ee6cdd01fccdd5a1a088f4d538" (UID: "6cfbc1ee6cdd01fccdd5a1a088f4d538"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:18.079780 master-0 kubenswrapper[16176]: I1203 14:00:18.079719 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "6cfbc1ee6cdd01fccdd5a1a088f4d538" (UID: "6cfbc1ee6cdd01fccdd5a1a088f4d538"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:00:18.177134 master-0 kubenswrapper[16176]: I1203 14:00:18.177060 16176 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:18.177134 master-0 kubenswrapper[16176]: I1203 14:00:18.177108 16176 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-var-log\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:18.177134 master-0 kubenswrapper[16176]: I1203 14:00:18.177121 16176 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/6cfbc1ee6cdd01fccdd5a1a088f4d538-manifests\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:18.961099 master-0 kubenswrapper[16176]: I1203 14:00:18.961012 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:00:18.961099 master-0 kubenswrapper[16176]: I1203 14:00:18.961100 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:00:18.961506 master-0 kubenswrapper[16176]: I1203 14:00:18.961194 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get 
\"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:00:18.961506 master-0 kubenswrapper[16176]: I1203 14:00:18.961316 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:00:19.262032 master-0 kubenswrapper[16176]: W1203 14:00:19.261970 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c6fa89f_268c_477b_9f04_238d2305cc89.slice/crio-e078d164386141b93f1da09365f14eb78ba0a075e0b0759ddf2e5cf022002753 WatchSource:0}: Error finding container e078d164386141b93f1da09365f14eb78ba0a075e0b0759ddf2e5cf022002753: Status 404 returned error can't find the container with id e078d164386141b93f1da09365f14eb78ba0a075e0b0759ddf2e5cf022002753 Dec 03 14:00:20.691670 master-0 kubenswrapper[16176]: E1203 14:00:20.691589 16176 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.899s" Dec 03 14:00:20.709069 master-0 kubenswrapper[16176]: I1203 14:00:20.708994 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfbc1ee6cdd01fccdd5a1a088f4d538" path="/var/lib/kubelet/pods/6cfbc1ee6cdd01fccdd5a1a088f4d538/volumes" Dec 03 14:00:20.709402 master-0 kubenswrapper[16176]: I1203 14:00:20.709355 16176 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Dec 03 14:00:21.018309 master-0 kubenswrapper[16176]: I1203 14:00:21.018205 16176 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Dec 03 14:00:21.093292 master-0 kubenswrapper[16176]: 
I1203 14:00:21.093211 16176 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="28783bcd4dcd4aa0a02c669250a6c16e917370eab9ca274d5505b517c9874415" exitCode=0 Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: E1203 14:00:21.700180 16176 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.008s" Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700427 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700462 16176 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="bf0f2d5e-078b-420b-869a-5b9e44f18985" Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700544 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6z4sc"] Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700624 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"] Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700697 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700800 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"af5dcdfc80a9aed23308058402ef5ee8200ed7993e5f2d9cd60e937998ff7919"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 
14:00:21.700912 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701017 16176 scope.go:117] "RemoveContainer" containerID="2927a79f39ed7802aaaf3f621d8e971809af85925fbb920aac36cdee358d7dd1" Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.700939 16176 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="bf0f2d5e-078b-420b-869a-5b9e44f18985" Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701595 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701666 16176 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="360b105f-8399-4d33-bfec-6253d2bad660" Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701694 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701783 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"8c5297427b396fa8732c10042afbb91ea37eb70462659d5bb64cdcf4bc7a43ac"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701861 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" 
event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"a5941838e14d0f940c2698afe59198536927da52075e6f841e402e01f0ec92bf"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701895 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"e078d164386141b93f1da09365f14eb78ba0a075e0b0759ddf2e5cf022002753"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.701969 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.702000 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"28783bcd4dcd4aa0a02c669250a6c16e917370eab9ca274d5505b517c9874415"} Dec 03 14:00:21.703685 master-0 kubenswrapper[16176]: I1203 14:00:21.702079 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"3a804dba6904156085c90f6cda9cd5712202105d18772a319912b1b6826d11b6"} Dec 03 14:00:21.719923 master-0 kubenswrapper[16176]: I1203 14:00:21.719837 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:00:21.720094 master-0 kubenswrapper[16176]: I1203 14:00:21.719916 16176 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
mirrorPodUID="360b105f-8399-4d33-bfec-6253d2bad660" Dec 03 14:00:21.993032 master-0 kubenswrapper[16176]: I1203 14:00:21.987814 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:00:22.120183 master-0 kubenswrapper[16176]: I1203 14:00:22.120122 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"0c03e8e688624f8100b3363cdd3745128b8e51e4e7927b4fa1f8b7fa1283a77a"} Dec 03 14:00:22.124658 master-0 kubenswrapper[16176]: I1203 14:00:22.124527 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2"} Dec 03 14:00:22.125308 master-0 kubenswrapper[16176]: I1203 14:00:22.125241 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:00:22.129370 master-0 kubenswrapper[16176]: I1203 14:00:22.129297 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"719e2a700ac797c5115a604641d9dde4f00ed6896482f7bbedc5373a5a8d1d2b"} Dec 03 14:00:22.340607 master-0 kubenswrapper[16176]: I1203 14:00:22.340280 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" podStartSLOduration=61.340226407 podStartE2EDuration="1m1.340226407s" podCreationTimestamp="2025-12-03 13:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:22.317101054 +0000 UTC m=+112.742741726" watchObservedRunningTime="2025-12-03 14:00:22.340226407 +0000 UTC m=+112.765867079" Dec 03 14:00:22.360450 master-0 kubenswrapper[16176]: I1203 14:00:22.360185 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podStartSLOduration=60.360155736 podStartE2EDuration="1m0.360155736s" podCreationTimestamp="2025-12-03 13:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:22.34139161 +0000 UTC m=+112.767032282" watchObservedRunningTime="2025-12-03 14:00:22.360155736 +0000 UTC m=+112.785796388" Dec 03 14:00:22.378519 master-0 kubenswrapper[16176]: I1203 14:00:22.378424 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" podStartSLOduration=70.378399107 podStartE2EDuration="1m10.378399107s" podCreationTimestamp="2025-12-03 13:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:22.377917113 +0000 UTC m=+112.803557795" watchObservedRunningTime="2025-12-03 14:00:22.378399107 +0000 UTC m=+112.804039769" Dec 03 14:00:22.378771 master-0 kubenswrapper[16176]: I1203 14:00:22.378729 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=11.378723177 podStartE2EDuration="11.378723177s" podCreationTimestamp="2025-12-03 14:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:22.362547626 +0000 UTC m=+112.788188298" 
watchObservedRunningTime="2025-12-03 14:00:22.378723177 +0000 UTC m=+112.804363849" Dec 03 14:00:22.923892 master-0 kubenswrapper[16176]: I1203 14:00:22.923818 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-54f97f57-rr9px"] Dec 03 14:00:22.924566 master-0 kubenswrapper[16176]: E1203 14:00:22.924278 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" containerName="installer" Dec 03 14:00:22.924566 master-0 kubenswrapper[16176]: I1203 14:00:22.924301 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" containerName="installer" Dec 03 14:00:22.924566 master-0 kubenswrapper[16176]: I1203 14:00:22.924512 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcaccc5-46b1-4a38-b3af-6839dec529d3" containerName="installer" Dec 03 14:00:22.925241 master-0 kubenswrapper[16176]: I1203 14:00:22.925211 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:22.926055 master-0 kubenswrapper[16176]: I1203 14:00:22.925986 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"] Dec 03 14:00:22.927095 master-0 kubenswrapper[16176]: I1203 14:00:22.927054 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:00:22.928167 master-0 kubenswrapper[16176]: I1203 14:00:22.928102 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 03 14:00:22.928167 master-0 kubenswrapper[16176]: I1203 14:00:22.928142 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 14:00:22.928311 master-0 kubenswrapper[16176]: I1203 14:00:22.928223 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 14:00:22.928437 master-0 kubenswrapper[16176]: I1203 14:00:22.928388 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 14:00:22.928634 master-0 kubenswrapper[16176]: I1203 14:00:22.928613 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 14:00:22.928794 master-0 kubenswrapper[16176]: I1203 14:00:22.928733 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 03 14:00:22.931011 master-0 kubenswrapper[16176]: I1203 14:00:22.930948 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"] Dec 03 14:00:22.932231 master-0 kubenswrapper[16176]: I1203 14:00:22.932194 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:22.935394 master-0 kubenswrapper[16176]: I1203 14:00:22.935346 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8zh52" Dec 03 14:00:22.935547 master-0 kubenswrapper[16176]: I1203 14:00:22.935345 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 03 14:00:22.942499 master-0 kubenswrapper[16176]: I1203 14:00:22.942427 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"] Dec 03 14:00:22.946725 master-0 kubenswrapper[16176]: I1203 14:00:22.946678 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"] Dec 03 14:00:22.956825 master-0 kubenswrapper[16176]: I1203 14:00:22.956772 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:00:22.957047 master-0 kubenswrapper[16176]: I1203 14:00:22.956959 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:00:22.960728 master-0 kubenswrapper[16176]: I1203 14:00:22.960693 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:00:22.962482 master-0 kubenswrapper[16176]: I1203 14:00:22.961439 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:00:23.013394 master-0 kubenswrapper[16176]: I1203 14:00:23.013052 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 
14:00:23.021624 master-0 kubenswrapper[16176]: I1203 14:00:23.019513 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:00:23.055375 master-0 kubenswrapper[16176]: I1203 14:00:23.055221 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.055375 master-0 kubenswrapper[16176]: I1203 14:00:23.055375 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.055690 master-0 kubenswrapper[16176]: I1203 14:00:23.055403 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.055690 master-0 kubenswrapper[16176]: I1203 14:00:23.055496 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.055690 master-0 kubenswrapper[16176]: I1203 14:00:23.055534 
16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:00:23.055690 master-0 kubenswrapper[16176]: I1203 14:00:23.055563 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:23.055690 master-0 kubenswrapper[16176]: I1203 14:00:23.055583 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.160760 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.160857 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: 
\"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.160905 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.160938 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.160983 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.161018 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 
14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.161044 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.162358 master-0 kubenswrapper[16176]: I1203 14:00:23.162224 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.169287 master-0 kubenswrapper[16176]: I1203 14:00:23.166373 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.169287 master-0 kubenswrapper[16176]: I1203 14:00:23.166388 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:23.170503 master-0 kubenswrapper[16176]: I1203 14:00:23.170242 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: 
\"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.170503 master-0 kubenswrapper[16176]: I1203 14:00:23.170390 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.182659 master-0 kubenswrapper[16176]: I1203 14:00:23.182434 16176 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="d433101d60e298c30c7d105d80b92d8e74a6cb93a14fb671416c963c3c89b31b" exitCode=0 Dec 03 14:00:23.182659 master-0 kubenswrapper[16176]: I1203 14:00:23.182614 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"d433101d60e298c30c7d105d80b92d8e74a6cb93a14fb671416c963c3c89b31b"} Dec 03 14:00:23.211544 master-0 kubenswrapper[16176]: I1203 14:00:23.205677 16176 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="719e2a700ac797c5115a604641d9dde4f00ed6896482f7bbedc5373a5a8d1d2b" exitCode=0 Dec 03 14:00:23.211544 master-0 kubenswrapper[16176]: I1203 14:00:23.205887 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"719e2a700ac797c5115a604641d9dde4f00ed6896482f7bbedc5373a5a8d1d2b"} Dec 03 14:00:23.211544 master-0 kubenswrapper[16176]: I1203 14:00:23.205955 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" 
event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"5cd949ac5ee2d4c762f3178ef1d027c253025d3b527c8391e3b7c924cb4b23dd"} Dec 03 14:00:23.227284 master-0 kubenswrapper[16176]: I1203 14:00:23.217074 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:00:23.249287 master-0 kubenswrapper[16176]: I1203 14:00:23.245288 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.285219 master-0 kubenswrapper[16176]: I1203 14:00:23.285149 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:00:23.290881 master-0 kubenswrapper[16176]: I1203 14:00:23.290821 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:23.300794 master-0 kubenswrapper[16176]: I1203 14:00:23.300734 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:00:23.313819 master-0 kubenswrapper[16176]: I1203 14:00:23.313651 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:00:23.348564 master-0 kubenswrapper[16176]: I1203 14:00:23.348463 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ddwmn" podStartSLOduration=63.871084978 podStartE2EDuration="1m8.348432518s" podCreationTimestamp="2025-12-03 13:59:15 +0000 UTC" firstStartedPulling="2025-12-03 14:00:18.050382651 +0000 UTC m=+108.476023313" lastFinishedPulling="2025-12-03 14:00:22.527730191 +0000 UTC m=+112.953370853" observedRunningTime="2025-12-03 14:00:23.312487992 +0000 UTC m=+113.738128664" watchObservedRunningTime="2025-12-03 14:00:23.348432518 +0000 UTC m=+113.774073180" Dec 03 14:00:23.353307 master-0 kubenswrapper[16176]: I1203 14:00:23.353270 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:23.801646 master-0 kubenswrapper[16176]: I1203 14:00:23.801605 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"] Dec 03 14:00:23.866976 master-0 kubenswrapper[16176]: I1203 14:00:23.865974 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"] Dec 03 14:00:24.218762 master-0 kubenswrapper[16176]: I1203 14:00:24.218689 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"43ae0856c507c2bff378887cd0a84b438d3de6bb78d726d6ebc950e521af94bd"} Dec 03 14:00:24.220668 master-0 kubenswrapper[16176]: I1203 14:00:24.220594 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" 
event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"a7aa41d9e38b13d29dc5083ad595f0fd8c144ec1dcf13aaa52a5ebc319548d4b"} Dec 03 14:00:24.225211 master-0 kubenswrapper[16176]: I1203 14:00:24.225164 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"e16025ed04953d73a838df3bbba4fe82854213e488b25790d3df13f916b39c4b"} Dec 03 14:00:24.225324 master-0 kubenswrapper[16176]: I1203 14:00:24.225216 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"3e027a2432753bc82aec3efe7b9ca2e5880a38a34b122a4aa5bbed8c2d285f9e"} Dec 03 14:00:24.227023 master-0 kubenswrapper[16176]: I1203 14:00:24.226977 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"2567af11c3f7c202e692cb205ae16d329fb818830770c2ee88f44b62c822c58c"} Dec 03 14:00:24.245081 master-0 kubenswrapper[16176]: I1203 14:00:24.244966 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6z4sc" podStartSLOduration=66.930830576 podStartE2EDuration="1m9.24493865s" podCreationTimestamp="2025-12-03 13:59:15 +0000 UTC" firstStartedPulling="2025-12-03 14:00:21.374506681 +0000 UTC m=+111.800147343" lastFinishedPulling="2025-12-03 14:00:23.688614755 +0000 UTC m=+114.114255417" observedRunningTime="2025-12-03 14:00:24.243332963 +0000 UTC m=+114.668973635" watchObservedRunningTime="2025-12-03 14:00:24.24493865 +0000 UTC m=+114.670579312" Dec 03 14:00:24.262499 master-0 kubenswrapper[16176]: I1203 14:00:24.262401 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podStartSLOduration=366.262371017 podStartE2EDuration="6m6.262371017s" podCreationTimestamp="2025-12-03 13:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:24.260468872 +0000 UTC m=+114.686109534" watchObservedRunningTime="2025-12-03 14:00:24.262371017 +0000 UTC m=+114.688011679" Dec 03 14:00:24.353355 master-0 kubenswrapper[16176]: I1203 14:00:24.348864 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:00:30.233487 master-0 kubenswrapper[16176]: I1203 14:00:30.233431 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vkpv4"] Dec 03 14:00:30.237997 master-0 kubenswrapper[16176]: I1203 14:00:30.237958 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.240514 master-0 kubenswrapper[16176]: I1203 14:00:30.240390 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l6rgr" Dec 03 14:00:30.242604 master-0 kubenswrapper[16176]: I1203 14:00:30.240630 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 14:00:30.242604 master-0 kubenswrapper[16176]: I1203 14:00:30.240625 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 14:00:30.242604 master-0 kubenswrapper[16176]: I1203 14:00:30.240727 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 14:00:30.282753 master-0 kubenswrapper[16176]: I1203 14:00:30.282426 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-vkpv4"] Dec 03 14:00:30.326422 master-0 kubenswrapper[16176]: I1203 14:00:30.317148 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"] Dec 03 14:00:30.326422 master-0 kubenswrapper[16176]: I1203 14:00:30.317623 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" containerID="cri-o://198e100a87c3ebf3cf56cbb72aee221b10d3d9da6179d5cd2d009567c565ee93" gracePeriod=30 Dec 03 14:00:30.326422 master-0 kubenswrapper[16176]: I1203 14:00:30.322692 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"] Dec 03 14:00:30.326422 master-0 kubenswrapper[16176]: I1203 14:00:30.323051 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" containerName="controller-manager" containerID="cri-o://d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8" gracePeriod=30 Dec 03 14:00:30.331908 master-0 kubenswrapper[16176]: I1203 14:00:30.331723 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-77df56447c-vsrxx"] Dec 03 14:00:30.332780 master-0 kubenswrapper[16176]: I1203 14:00:30.332734 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.361300 master-0 kubenswrapper[16176]: I1203 14:00:30.343432 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 14:00:30.361300 master-0 kubenswrapper[16176]: I1203 14:00:30.344440 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 14:00:30.361300 master-0 kubenswrapper[16176]: I1203 14:00:30.344712 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 14:00:30.361300 master-0 kubenswrapper[16176]: I1203 14:00:30.345798 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7n524" Dec 03 14:00:30.361300 master-0 kubenswrapper[16176]: I1203 14:00:30.347354 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 14:00:30.384963 master-0 kubenswrapper[16176]: I1203 14:00:30.381060 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-77df56447c-vsrxx"] Dec 03 14:00:30.390288 master-0 kubenswrapper[16176]: I1203 14:00:30.388185 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 14:00:30.394492 master-0 kubenswrapper[16176]: I1203 14:00:30.393283 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.394492 master-0 kubenswrapper[16176]: I1203 14:00:30.393372 16176 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.506127 master-0 kubenswrapper[16176]: I1203 14:00:30.506077 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.506328 master-0 kubenswrapper[16176]: I1203 14:00:30.506187 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.506373 master-0 kubenswrapper[16176]: I1203 14:00:30.506352 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.506430 master-0 kubenswrapper[16176]: I1203 14:00:30.506384 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: 
\"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.506568 master-0 kubenswrapper[16176]: I1203 14:00:30.506553 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.506604 master-0 kubenswrapper[16176]: I1203 14:00:30.506584 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.510888 master-0 kubenswrapper[16176]: I1203 14:00:30.510815 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.512207 master-0 kubenswrapper[16176]: I1203 14:00:30.512177 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"] Dec 03 14:00:30.513213 master-0 kubenswrapper[16176]: I1203 14:00:30.513193 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.516380 master-0 kubenswrapper[16176]: I1203 14:00:30.516205 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 14:00:30.516517 master-0 kubenswrapper[16176]: I1203 14:00:30.516487 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-qldhm" Dec 03 14:00:30.549323 master-0 kubenswrapper[16176]: I1203 14:00:30.544917 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"] Dec 03 14:00:30.556808 master-0 kubenswrapper[16176]: I1203 14:00:30.556306 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608136 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608232 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608374 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608433 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608473 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zllx\" (UniqueName: \"kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608508 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.608542 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.609732 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.612420 master-0 kubenswrapper[16176]: I1203 14:00:30.610732 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.635442 master-0 kubenswrapper[16176]: I1203 14:00:30.626927 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.635442 master-0 kubenswrapper[16176]: I1203 14:00:30.634797 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.662982 
master-0 kubenswrapper[16176]: I1203 14:00:30.662682 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:00:30.710093 master-0 kubenswrapper[16176]: I1203 14:00:30.709673 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.710093 master-0 kubenswrapper[16176]: I1203 14:00:30.709877 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.710093 master-0 kubenswrapper[16176]: I1203 14:00:30.709938 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zllx\" (UniqueName: \"kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.713300 master-0 kubenswrapper[16176]: I1203 14:00:30.713230 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.718356 master-0 kubenswrapper[16176]: I1203 14:00:30.717440 
16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.731326 master-0 kubenswrapper[16176]: I1203 14:00:30.731158 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zllx\" (UniqueName: \"kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx\") pod \"collect-profiles-29412840-nfbpl\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:30.874893 master-0 kubenswrapper[16176]: I1203 14:00:30.874794 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:30.940864 master-0 kubenswrapper[16176]: I1203 14:00:30.940799 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:31.117394 master-0 kubenswrapper[16176]: I1203 14:00:31.117309 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:31.117654 master-0 kubenswrapper[16176]: I1203 14:00:31.117413 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:31.121201 master-0 kubenswrapper[16176]: I1203 14:00:31.121155 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 14:00:31.126916 master-0 kubenswrapper[16176]: I1203 14:00:31.126868 16176 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:00:31.213309 master-0 kubenswrapper[16176]: I1203 14:00:31.212934 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:31.221187 master-0 kubenswrapper[16176]: I1203 14:00:31.221059 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") pod \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " Dec 03 14:00:31.221187 master-0 kubenswrapper[16176]: I1203 14:00:31.221158 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") pod \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " Dec 03 14:00:31.221373 master-0 kubenswrapper[16176]: I1203 14:00:31.221201 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") pod \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " Dec 03 14:00:31.221672 master-0 kubenswrapper[16176]: I1203 14:00:31.221627 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") pod \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " Dec 03 14:00:31.221733 master-0 
kubenswrapper[16176]: I1203 14:00:31.221714 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") pod \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\" (UID: \"f5f23b6d-8303-46d8-892e-8e2c01b567b5\") " Dec 03 14:00:31.221891 master-0 kubenswrapper[16176]: I1203 14:00:31.221836 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca" (OuterVolumeSpecName: "client-ca") pod "f5f23b6d-8303-46d8-892e-8e2c01b567b5" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:31.222154 master-0 kubenswrapper[16176]: I1203 14:00:31.222111 16176 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:31.223063 master-0 kubenswrapper[16176]: I1203 14:00:31.223020 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config" (OuterVolumeSpecName: "config") pod "f5f23b6d-8303-46d8-892e-8e2c01b567b5" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:31.223736 master-0 kubenswrapper[16176]: I1203 14:00:31.223693 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f5f23b6d-8303-46d8-892e-8e2c01b567b5" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:31.230096 master-0 kubenswrapper[16176]: I1203 14:00:31.230023 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq" (OuterVolumeSpecName: "kube-api-access-8xrdq") pod "f5f23b6d-8303-46d8-892e-8e2c01b567b5" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5"). InnerVolumeSpecName "kube-api-access-8xrdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:00:31.237915 master-0 kubenswrapper[16176]: I1203 14:00:31.235480 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f5f23b6d-8303-46d8-892e-8e2c01b567b5" (UID: "f5f23b6d-8303-46d8-892e-8e2c01b567b5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:00:31.325064 master-0 kubenswrapper[16176]: I1203 14:00:31.324977 16176 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:31.325064 master-0 kubenswrapper[16176]: I1203 14:00:31.325043 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xrdq\" (UniqueName: \"kubernetes.io/projected/f5f23b6d-8303-46d8-892e-8e2c01b567b5-kube-api-access-8xrdq\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:31.325064 master-0 kubenswrapper[16176]: I1203 14:00:31.325059 16176 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5f23b6d-8303-46d8-892e-8e2c01b567b5-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:31.325229 master-0 kubenswrapper[16176]: I1203 14:00:31.325072 16176 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/f5f23b6d-8303-46d8-892e-8e2c01b567b5-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:31.328136 master-0 kubenswrapper[16176]: I1203 14:00:31.326086 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vkpv4"] Dec 03 14:00:31.331075 master-0 kubenswrapper[16176]: W1203 14:00:31.331016 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3675c78_1902_4b92_8a93_cf2dc316f060.slice/crio-877432d5f2de2602369a3a7cb506b320c382d202970875ee79a84ac07c222d1f WatchSource:0}: Error finding container 877432d5f2de2602369a3a7cb506b320c382d202970875ee79a84ac07c222d1f: Status 404 returned error can't find the container with id 877432d5f2de2602369a3a7cb506b320c382d202970875ee79a84ac07c222d1f Dec 03 14:00:31.332462 master-0 kubenswrapper[16176]: I1203 14:00:31.332417 16176 generic.go:334] "Generic (PLEG): container finished" podID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" containerID="d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8" exitCode=0 Dec 03 14:00:31.332528 master-0 kubenswrapper[16176]: I1203 14:00:31.332497 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" event={"ID":"f5f23b6d-8303-46d8-892e-8e2c01b567b5","Type":"ContainerDied","Data":"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8"} Dec 03 14:00:31.332563 master-0 kubenswrapper[16176]: I1203 14:00:31.332535 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" event={"ID":"f5f23b6d-8303-46d8-892e-8e2c01b567b5","Type":"ContainerDied","Data":"80e95cd74710420c097c7cf837380f44e3fef76745b76b26d24bb3a848d0ba8d"} Dec 03 14:00:31.332563 master-0 kubenswrapper[16176]: I1203 14:00:31.332559 16176 scope.go:117] "RemoveContainer" 
containerID="d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8" Dec 03 14:00:31.332731 master-0 kubenswrapper[16176]: I1203 14:00:31.332702 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d8fb964c9-v2h98" Dec 03 14:00:31.337099 master-0 kubenswrapper[16176]: I1203 14:00:31.336891 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"82bf6f53b6a48ad08a379c0ecf47b28f74fed2b4944e445cff567b57072a04d7"} Dec 03 14:00:31.340213 master-0 kubenswrapper[16176]: I1203 14:00:31.339933 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:31.348451 master-0 kubenswrapper[16176]: I1203 14:00:31.347725 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:00:31.351691 master-0 kubenswrapper[16176]: I1203 14:00:31.350055 16176 generic.go:334] "Generic (PLEG): container finished" podID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerID="198e100a87c3ebf3cf56cbb72aee221b10d3d9da6179d5cd2d009567c565ee93" exitCode=0 Dec 03 14:00:31.351691 master-0 kubenswrapper[16176]: I1203 14:00:31.350097 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" event={"ID":"ecc68b17-9112-471d-89f9-15bf30dfa004","Type":"ContainerDied","Data":"198e100a87c3ebf3cf56cbb72aee221b10d3d9da6179d5cd2d009567c565ee93"} Dec 03 14:00:31.354973 master-0 kubenswrapper[16176]: I1203 14:00:31.354874 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" 
event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"df9d753b9504b39f1407a8e39cf00198f72311127c861e3310dc84c640d8fb5e"} Dec 03 14:00:31.365823 master-0 kubenswrapper[16176]: I1203 14:00:31.365550 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podStartSLOduration=253.015122287 podStartE2EDuration="4m19.36553007s" podCreationTimestamp="2025-12-03 13:56:12 +0000 UTC" firstStartedPulling="2025-12-03 14:00:23.879368544 +0000 UTC m=+114.305009206" lastFinishedPulling="2025-12-03 14:00:30.229776327 +0000 UTC m=+120.655416989" observedRunningTime="2025-12-03 14:00:31.36245463 +0000 UTC m=+121.788095302" watchObservedRunningTime="2025-12-03 14:00:31.36553007 +0000 UTC m=+121.791170732" Dec 03 14:00:31.370347 master-0 kubenswrapper[16176]: I1203 14:00:31.369958 16176 scope.go:117] "RemoveContainer" containerID="d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8" Dec 03 14:00:31.370571 master-0 kubenswrapper[16176]: I1203 14:00:31.370518 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:31.370571 master-0 kubenswrapper[16176]: I1203 14:00:31.370569 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:31.374351 master-0 kubenswrapper[16176]: E1203 14:00:31.371596 16176 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8\": container with ID starting with d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8 not found: ID does not exist" containerID="d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8" Dec 03 14:00:31.374351 master-0 kubenswrapper[16176]: I1203 14:00:31.371635 16176 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8"} err="failed to get container status \"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8\": rpc error: code = NotFound desc = could not find container \"d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8\": container with ID starting with d29b346661441ded777b8e18ca11e8b0664c168a23aa8e7dfe7cf782bf4324d8 not found: ID does not exist" Dec 03 14:00:31.431557 master-0 kubenswrapper[16176]: I1203 14:00:31.430970 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:00:31.434142 master-0 kubenswrapper[16176]: I1203 14:00:31.433704 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:31.461684 master-0 kubenswrapper[16176]: I1203 14:00:31.461006 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-54f97f57-rr9px" podStartSLOduration=266.590134803 podStartE2EDuration="4m33.460962726s" podCreationTimestamp="2025-12-03 13:55:58 +0000 UTC" firstStartedPulling="2025-12-03 14:00:23.333701979 +0000 UTC m=+113.759342641" lastFinishedPulling="2025-12-03 14:00:30.204529902 +0000 UTC m=+120.630170564" observedRunningTime="2025-12-03 14:00:31.440041768 +0000 UTC m=+121.865682440" watchObservedRunningTime="2025-12-03 14:00:31.460962726 +0000 UTC m=+121.886603388" Dec 03 14:00:31.463827 master-0 kubenswrapper[16176]: I1203 14:00:31.463776 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"] Dec 03 14:00:31.535617 master-0 kubenswrapper[16176]: I1203 14:00:31.535140 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"] Dec 03 
14:00:31.535617 master-0 kubenswrapper[16176]: I1203 14:00:31.535216 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d8fb964c9-v2h98"] Dec 03 14:00:31.571497 master-0 kubenswrapper[16176]: I1203 14:00:31.571432 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-77df56447c-vsrxx"] Dec 03 14:00:31.603687 master-0 kubenswrapper[16176]: W1203 14:00:31.603595 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8dc6511_7339_4269_9d43_14ce53bb4e7f.slice/crio-5067e8f96a60c56b169fd67eb47dc8e3777533848a6ec95909ae9ce0a0b5bd8d WatchSource:0}: Error finding container 5067e8f96a60c56b169fd67eb47dc8e3777533848a6ec95909ae9ce0a0b5bd8d: Status 404 returned error can't find the container with id 5067e8f96a60c56b169fd67eb47dc8e3777533848a6ec95909ae9ce0a0b5bd8d Dec 03 14:00:31.678347 master-0 kubenswrapper[16176]: I1203 14:00:31.678236 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-565bdcb8-477pk"] Dec 03 14:00:31.678590 master-0 kubenswrapper[16176]: E1203 14:00:31.678555 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" containerName="controller-manager" Dec 03 14:00:31.678590 master-0 kubenswrapper[16176]: I1203 14:00:31.678573 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" containerName="controller-manager" Dec 03 14:00:31.678749 master-0 kubenswrapper[16176]: I1203 14:00:31.678726 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" containerName="controller-manager" Dec 03 14:00:31.680030 master-0 kubenswrapper[16176]: I1203 14:00:31.679441 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.683581 master-0 kubenswrapper[16176]: I1203 14:00:31.683518 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bdlwz" Dec 03 14:00:31.683731 master-0 kubenswrapper[16176]: I1203 14:00:31.683540 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 03 14:00:31.683915 master-0 kubenswrapper[16176]: I1203 14:00:31.683875 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 03 14:00:31.684275 master-0 kubenswrapper[16176]: I1203 14:00:31.683902 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 03 14:00:31.700740 master-0 kubenswrapper[16176]: I1203 14:00:31.699861 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-565bdcb8-477pk"] Dec 03 14:00:31.811894 master-0 kubenswrapper[16176]: I1203 14:00:31.811825 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f23b6d-8303-46d8-892e-8e2c01b567b5" path="/var/lib/kubelet/pods/f5f23b6d-8303-46d8-892e-8e2c01b567b5/volumes" Dec 03 14:00:31.845007 master-0 kubenswrapper[16176]: I1203 14:00:31.844902 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.845007 master-0 kubenswrapper[16176]: I1203 14:00:31.844986 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.845361 master-0 kubenswrapper[16176]: I1203 14:00:31.845052 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.845361 master-0 kubenswrapper[16176]: I1203 14:00:31.845082 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.933808 master-0 kubenswrapper[16176]: I1203 14:00:31.933730 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"] Dec 03 14:00:31.934697 master-0 kubenswrapper[16176]: I1203 14:00:31.934671 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:31.937749 master-0 kubenswrapper[16176]: I1203 14:00:31.937714 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 14:00:31.938317 master-0 kubenswrapper[16176]: I1203 14:00:31.938209 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 14:00:31.938463 master-0 kubenswrapper[16176]: I1203 14:00:31.938439 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 14:00:31.938598 master-0 kubenswrapper[16176]: I1203 14:00:31.938468 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 14:00:31.938970 master-0 kubenswrapper[16176]: I1203 14:00:31.938944 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9" Dec 03 14:00:31.939064 master-0 kubenswrapper[16176]: I1203 14:00:31.939030 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 14:00:31.947476 master-0 kubenswrapper[16176]: I1203 14:00:31.947436 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.947683 master-0 kubenswrapper[16176]: I1203 14:00:31.947657 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.947893 master-0 kubenswrapper[16176]: I1203 14:00:31.947867 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.948016 master-0 kubenswrapper[16176]: I1203 14:00:31.947996 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.949605 master-0 kubenswrapper[16176]: I1203 14:00:31.948828 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.953032 master-0 kubenswrapper[16176]: I1203 14:00:31.952644 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.955704 master-0 kubenswrapper[16176]: I1203 14:00:31.955654 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"] Dec 03 14:00:31.958099 master-0 kubenswrapper[16176]: I1203 14:00:31.957937 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 14:00:31.962777 master-0 kubenswrapper[16176]: I1203 14:00:31.962728 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:31.972207 master-0 kubenswrapper[16176]: I1203 14:00:31.972143 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:32.004045 master-0 kubenswrapper[16176]: I1203 14:00:32.003965 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:00:32.049758 master-0 kubenswrapper[16176]: I1203 14:00:32.049620 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.049758 master-0 kubenswrapper[16176]: I1203 14:00:32.049765 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.050028 master-0 kubenswrapper[16176]: I1203 14:00:32.049804 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.050028 master-0 kubenswrapper[16176]: I1203 14:00:32.049830 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.050028 master-0 kubenswrapper[16176]: I1203 14:00:32.049865 16176 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrwh\" (UniqueName: \"kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.152030 master-0 kubenswrapper[16176]: I1203 14:00:32.151401 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.152030 master-0 kubenswrapper[16176]: I1203 14:00:32.151457 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.152030 master-0 kubenswrapper[16176]: I1203 14:00:32.151497 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.152030 master-0 kubenswrapper[16176]: I1203 14:00:32.151521 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: 
\"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.152030 master-0 kubenswrapper[16176]: I1203 14:00:32.151551 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrwh\" (UniqueName: \"kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.153171 master-0 kubenswrapper[16176]: I1203 14:00:32.152886 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.153238 master-0 kubenswrapper[16176]: I1203 14:00:32.153175 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.156648 master-0 kubenswrapper[16176]: I1203 14:00:32.156590 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.158665 master-0 kubenswrapper[16176]: I1203 14:00:32.158601 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.181372 master-0 kubenswrapper[16176]: I1203 14:00:32.181294 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrwh\" (UniqueName: \"kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh\") pod \"controller-manager-5c8b4c9687-4pxw5\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.263298 master-0 kubenswrapper[16176]: I1203 14:00:32.259991 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:32.300353 master-0 kubenswrapper[16176]: I1203 14:00:32.300240 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:32.305865 master-0 kubenswrapper[16176]: I1203 14:00:32.304048 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:32.389120 master-0 kubenswrapper[16176]: I1203 14:00:32.389003 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"fc86fde0a5d65413e6a1c92cecb6d204dac3c0ea36aa50c66d9cbb60436631fe"} Dec 03 14:00:32.389421 master-0 kubenswrapper[16176]: I1203 14:00:32.389145 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"877432d5f2de2602369a3a7cb506b320c382d202970875ee79a84ac07c222d1f"} 
Dec 03 14:00:32.399967 master-0 kubenswrapper[16176]: I1203 14:00:32.395347 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"5067e8f96a60c56b169fd67eb47dc8e3777533848a6ec95909ae9ce0a0b5bd8d"} Dec 03 14:00:32.410484 master-0 kubenswrapper[16176]: I1203 14:00:32.409828 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" event={"ID":"3c314fa4-1222-42cf-b87a-f2cd19e67dde","Type":"ContainerStarted","Data":"232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c"} Dec 03 14:00:32.410484 master-0 kubenswrapper[16176]: I1203 14:00:32.409896 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" event={"ID":"3c314fa4-1222-42cf-b87a-f2cd19e67dde","Type":"ContainerStarted","Data":"413d49c6d47f3370a382cdcc3f7759a26fc8734733134ef0a866e80d76355e90"} Dec 03 14:00:32.410878 master-0 kubenswrapper[16176]: I1203 14:00:32.410757 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:32.414341 master-0 kubenswrapper[16176]: I1203 14:00:32.413791 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:00:32.421794 master-0 kubenswrapper[16176]: I1203 14:00:32.420852 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vkpv4" podStartSLOduration=2.420829542 podStartE2EDuration="2.420829542s" podCreationTimestamp="2025-12-03 14:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:32.419017109 +0000 UTC m=+122.844657781" 
watchObservedRunningTime="2025-12-03 14:00:32.420829542 +0000 UTC m=+122.846470204" Dec 03 14:00:32.449861 master-0 kubenswrapper[16176]: I1203 14:00:32.449478 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-565bdcb8-477pk"] Dec 03 14:00:32.455335 master-0 kubenswrapper[16176]: I1203 14:00:32.449232 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" podStartSLOduration=2.449202008 podStartE2EDuration="2.449202008s" podCreationTimestamp="2025-12-03 14:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:32.445865941 +0000 UTC m=+122.871506623" watchObservedRunningTime="2025-12-03 14:00:32.449202008 +0000 UTC m=+122.874842670" Dec 03 14:00:32.486290 master-0 kubenswrapper[16176]: I1203 14:00:32.485886 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:00:32.591333 master-0 kubenswrapper[16176]: I1203 14:00:32.590974 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"] Dec 03 14:00:32.610240 master-0 kubenswrapper[16176]: W1203 14:00:32.610145 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12822200_5857_4e2a_96bf_31c2d917ae9e.slice/crio-9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8 WatchSource:0}: Error finding container 9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8: Status 404 returned error can't find the container with id 9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8 Dec 03 14:00:33.420437 master-0 kubenswrapper[16176]: I1203 14:00:33.420293 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="3c314fa4-1222-42cf-b87a-f2cd19e67dde" containerID="232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c" exitCode=0 Dec 03 14:00:33.420437 master-0 kubenswrapper[16176]: I1203 14:00:33.420400 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" event={"ID":"3c314fa4-1222-42cf-b87a-f2cd19e67dde","Type":"ContainerDied","Data":"232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c"} Dec 03 14:00:33.424809 master-0 kubenswrapper[16176]: I1203 14:00:33.424748 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" event={"ID":"12822200-5857-4e2a-96bf-31c2d917ae9e","Type":"ContainerStarted","Data":"813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df"} Dec 03 14:00:33.424896 master-0 kubenswrapper[16176]: I1203 14:00:33.424848 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" event={"ID":"12822200-5857-4e2a-96bf-31c2d917ae9e","Type":"ContainerStarted","Data":"9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8"} Dec 03 14:00:33.425206 master-0 kubenswrapper[16176]: I1203 14:00:33.425170 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:33.433187 master-0 kubenswrapper[16176]: I1203 14:00:33.433125 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:00:33.434667 master-0 kubenswrapper[16176]: I1203 14:00:33.434608 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"41d0a87ba9075444236ec99500a86598e9faf90d6dd911edfef9a09febf55f73"} Dec 03 
14:00:33.465067 master-0 kubenswrapper[16176]: I1203 14:00:33.464969 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" podStartSLOduration=3.4649381679999998 podStartE2EDuration="3.464938168s" podCreationTimestamp="2025-12-03 14:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:33.463849906 +0000 UTC m=+123.889490588" watchObservedRunningTime="2025-12-03 14:00:33.464938168 +0000 UTC m=+123.890578820" Dec 03 14:00:33.609090 master-0 kubenswrapper[16176]: I1203 14:00:33.608574 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-pvrfs"] Dec 03 14:00:33.610044 master-0 kubenswrapper[16176]: I1203 14:00:33.610017 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.613986 master-0 kubenswrapper[16176]: I1203 14:00:33.613924 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 14:00:33.613986 master-0 kubenswrapper[16176]: I1203 14:00:33.613959 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 14:00:33.691240 master-0 kubenswrapper[16176]: I1203 14:00:33.691073 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.691240 master-0 kubenswrapper[16176]: I1203 14:00:33.691193 16176 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.691240 master-0 kubenswrapper[16176]: I1203 14:00:33.691227 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.792963 master-0 kubenswrapper[16176]: I1203 14:00:33.792897 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.793317 master-0 kubenswrapper[16176]: I1203 14:00:33.792977 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.793317 master-0 kubenswrapper[16176]: I1203 14:00:33.793035 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: 
\"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.798086 master-0 kubenswrapper[16176]: I1203 14:00:33.798053 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.803556 master-0 kubenswrapper[16176]: I1203 14:00:33.801771 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.811889 master-0 kubenswrapper[16176]: I1203 14:00:33.811749 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.936081 master-0 kubenswrapper[16176]: I1203 14:00:33.936003 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:00:33.947492 master-0 kubenswrapper[16176]: I1203 14:00:33.947333 16176 patch_prober.go:28] interesting pod/route-controller-manager-6fcd4b8856-ztns6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:00:33.947672 master-0 kubenswrapper[16176]: I1203 14:00:33.947466 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:00:33.970731 master-0 kubenswrapper[16176]: W1203 14:00:33.970515 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeecc43f5_708f_4395_98cc_696b243d6321.slice/crio-0a499fcaace94995b8933a958dd035af9340ead9f4ba78b7f1b2c1c086355ca2 WatchSource:0}: Error finding container 0a499fcaace94995b8933a958dd035af9340ead9f4ba78b7f1b2c1c086355ca2: Status 404 returned error can't find the container with id 0a499fcaace94995b8933a958dd035af9340ead9f4ba78b7f1b2c1c086355ca2 Dec 03 14:00:34.443983 master-0 kubenswrapper[16176]: I1203 14:00:34.443888 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"416059ff0cc777d3d9d6dfaa42a36a430fbc17cbb4e53827d4fbb502c6e34e99"} Dec 03 14:00:34.443983 master-0 kubenswrapper[16176]: I1203 14:00:34.443974 
16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"0a499fcaace94995b8933a958dd035af9340ead9f4ba78b7f1b2c1c086355ca2"} Dec 03 14:00:34.768944 master-0 kubenswrapper[16176]: I1203 14:00:34.766516 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"] Dec 03 14:00:34.768944 master-0 kubenswrapper[16176]: I1203 14:00:34.767468 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.771758 master-0 kubenswrapper[16176]: I1203 14:00:34.771500 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nqkqh" Dec 03 14:00:34.771758 master-0 kubenswrapper[16176]: I1203 14:00:34.771576 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 14:00:34.771887 master-0 kubenswrapper[16176]: I1203 14:00:34.771787 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 14:00:34.771887 master-0 kubenswrapper[16176]: I1203 14:00:34.771799 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781003 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781224 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781349 16176 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781384 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781689 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.781779 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.782123 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 14:00:34.783930 master-0 kubenswrapper[16176]: I1203 14:00:34.782123 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 14:00:34.793072 master-0 kubenswrapper[16176]: I1203 14:00:34.792498 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"] Dec 03 14:00:34.793072 master-0 kubenswrapper[16176]: I1203 14:00:34.792625 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818168 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqf6d\" (UniqueName: \"kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 
14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818234 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818295 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818318 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818343 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818378 16176 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818394 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818424 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818450 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818470 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818504 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818524 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.819336 master-0 kubenswrapper[16176]: I1203 14:00:34.818546 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.820700 master-0 kubenswrapper[16176]: I1203 14:00:34.820655 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 
14:00:34.921294 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921393 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921444 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921472 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921499 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921534 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.921562 master-0 kubenswrapper[16176]: I1203 14:00:34.921566 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921597 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921635 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqf6d\" (UniqueName: \"kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: 
\"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921669 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921711 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921735 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921764 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.921867 16176 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.922755 master-0 kubenswrapper[16176]: I1203 14:00:34.922437 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.923465 master-0 kubenswrapper[16176]: I1203 14:00:34.923009 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.925618 master-0 kubenswrapper[16176]: I1203 14:00:34.925542 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.926431 master-0 kubenswrapper[16176]: I1203 14:00:34.926216 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: 
\"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.926725 master-0 kubenswrapper[16176]: I1203 14:00:34.926569 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.927290 master-0 kubenswrapper[16176]: I1203 14:00:34.927210 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.928774 master-0 kubenswrapper[16176]: I1203 14:00:34.928137 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.929686 master-0 kubenswrapper[16176]: I1203 14:00:34.929647 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.931594 master-0 kubenswrapper[16176]: I1203 
14:00:34.931553 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.931943 master-0 kubenswrapper[16176]: I1203 14:00:34.931914 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.932422 master-0 kubenswrapper[16176]: I1203 14:00:34.932354 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:34.947408 master-0 kubenswrapper[16176]: I1203 14:00:34.947337 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqf6d\" (UniqueName: \"kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d\") pod \"oauth-openshift-79f7f4d988-pxd4d\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:35.118523 master-0 kubenswrapper[16176]: I1203 14:00:35.118457 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:38.384302 master-0 kubenswrapper[16176]: I1203 14:00:38.383381 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:38.474798 master-0 kubenswrapper[16176]: I1203 14:00:38.474728 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" event={"ID":"3c314fa4-1222-42cf-b87a-f2cd19e67dde","Type":"ContainerDied","Data":"413d49c6d47f3370a382cdcc3f7759a26fc8734733134ef0a866e80d76355e90"} Dec 03 14:00:38.474798 master-0 kubenswrapper[16176]: I1203 14:00:38.474788 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="413d49c6d47f3370a382cdcc3f7759a26fc8734733134ef0a866e80d76355e90" Dec 03 14:00:38.475344 master-0 kubenswrapper[16176]: I1203 14:00:38.474862 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:00:38.476491 master-0 kubenswrapper[16176]: I1203 14:00:38.476440 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume\") pod \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " Dec 03 14:00:38.476713 master-0 kubenswrapper[16176]: I1203 14:00:38.476681 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume\") pod \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " Dec 03 14:00:38.476809 master-0 kubenswrapper[16176]: I1203 14:00:38.476783 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zllx\" (UniqueName: \"kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx\") pod \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\" (UID: \"3c314fa4-1222-42cf-b87a-f2cd19e67dde\") " Dec 03 14:00:38.477476 master-0 kubenswrapper[16176]: I1203 14:00:38.477437 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume" (OuterVolumeSpecName: "config-volume") pod "3c314fa4-1222-42cf-b87a-f2cd19e67dde" (UID: "3c314fa4-1222-42cf-b87a-f2cd19e67dde"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:38.482212 master-0 kubenswrapper[16176]: I1203 14:00:38.482138 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx" (OuterVolumeSpecName: "kube-api-access-4zllx") pod "3c314fa4-1222-42cf-b87a-f2cd19e67dde" (UID: "3c314fa4-1222-42cf-b87a-f2cd19e67dde"). InnerVolumeSpecName "kube-api-access-4zllx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:00:38.487745 master-0 kubenswrapper[16176]: I1203 14:00:38.487677 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3c314fa4-1222-42cf-b87a-f2cd19e67dde" (UID: "3c314fa4-1222-42cf-b87a-f2cd19e67dde"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:00:38.578217 master-0 kubenswrapper[16176]: I1203 14:00:38.578148 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zllx\" (UniqueName: \"kubernetes.io/projected/3c314fa4-1222-42cf-b87a-f2cd19e67dde-kube-api-access-4zllx\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:38.578217 master-0 kubenswrapper[16176]: I1203 14:00:38.578206 16176 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3c314fa4-1222-42cf-b87a-f2cd19e67dde-secret-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:38.578217 master-0 kubenswrapper[16176]: I1203 14:00:38.578223 16176 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c314fa4-1222-42cf-b87a-f2cd19e67dde-config-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:38.654595 master-0 kubenswrapper[16176]: I1203 14:00:38.638224 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-pvrfs" podStartSLOduration=5.638188334 podStartE2EDuration="5.638188334s" podCreationTimestamp="2025-12-03 14:00:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:38.397187163 +0000 UTC m=+128.822827825" watchObservedRunningTime="2025-12-03 14:00:38.638188334 +0000 UTC m=+129.063828996" Dec 03 14:00:39.484018 master-0 kubenswrapper[16176]: I1203 14:00:39.483950 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"2a09165acb6a766506a8fe7bf33d8e34418e6c5b87698b5a708a45feb615a317"} Dec 03 14:00:39.486389 master-0 kubenswrapper[16176]: I1203 14:00:39.486332 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"d4e51eb8e51007bc54001295feff7c232e1b03f10ddff2ae3a464ceef9c2aa28"} Dec 03 14:00:39.486957 master-0 kubenswrapper[16176]: I1203 14:00:39.486888 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:39.490891 master-0 kubenswrapper[16176]: I1203 14:00:39.490840 16176 patch_prober.go:28] interesting pod/console-operator-77df56447c-vsrxx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/readyz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body= Dec 03 14:00:39.491037 master-0 kubenswrapper[16176]: I1203 14:00:39.490923 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.75:8443/readyz\": dial tcp 10.128.0.75:8443: connect: connection refused" Dec 03 14:00:39.511395 master-0 kubenswrapper[16176]: I1203 14:00:39.511302 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podStartSLOduration=1.816368426 podStartE2EDuration="9.511251725s" podCreationTimestamp="2025-12-03 14:00:30 +0000 UTC" firstStartedPulling="2025-12-03 14:00:31.606594963 +0000 UTC m=+122.032235625" lastFinishedPulling="2025-12-03 14:00:39.301478262 +0000 UTC m=+129.727118924" observedRunningTime="2025-12-03 14:00:39.510801932 +0000 UTC m=+129.936442604" watchObservedRunningTime="2025-12-03 14:00:39.511251725 +0000 UTC m=+129.936892387" Dec 03 14:00:39.698942 master-0 kubenswrapper[16176]: I1203 14:00:39.698871 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"] Dec 03 14:00:39.714055 master-0 kubenswrapper[16176]: W1203 14:00:39.713266 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbea7d8b9_2778_469b_9f91_fffbf7de5e68.slice/crio-603f5c42cfe32f9047c9208f62023807c8910157b24b27bfb3041b1af52546c8 WatchSource:0}: Error finding container 603f5c42cfe32f9047c9208f62023807c8910157b24b27bfb3041b1af52546c8: Status 404 returned error can't find the container with id 603f5c42cfe32f9047c9208f62023807c8910157b24b27bfb3041b1af52546c8 Dec 03 14:00:40.171113 master-0 kubenswrapper[16176]: I1203 14:00:40.170930 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-6f5db8559b-96ljh"] Dec 03 14:00:40.171396 master-0 kubenswrapper[16176]: E1203 14:00:40.171331 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c314fa4-1222-42cf-b87a-f2cd19e67dde" containerName="collect-profiles" Dec 03 14:00:40.171396 
master-0 kubenswrapper[16176]: I1203 14:00:40.171353 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c314fa4-1222-42cf-b87a-f2cd19e67dde" containerName="collect-profiles" Dec 03 14:00:40.171606 master-0 kubenswrapper[16176]: I1203 14:00:40.171561 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c314fa4-1222-42cf-b87a-f2cd19e67dde" containerName="collect-profiles" Dec 03 14:00:40.172471 master-0 kubenswrapper[16176]: I1203 14:00:40.172440 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:00:40.176180 master-0 kubenswrapper[16176]: I1203 14:00:40.176103 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-2blfd" Dec 03 14:00:40.178204 master-0 kubenswrapper[16176]: I1203 14:00:40.178168 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 03 14:00:40.178318 master-0 kubenswrapper[16176]: I1203 14:00:40.178235 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 14:00:40.196698 master-0 kubenswrapper[16176]: I1203 14:00:40.196632 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6f5db8559b-96ljh"] Dec 03 14:00:40.208927 master-0 kubenswrapper[16176]: I1203 14:00:40.208844 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:00:40.310597 master-0 kubenswrapper[16176]: I1203 14:00:40.310516 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: 
\"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:00:40.332683 master-0 kubenswrapper[16176]: I1203 14:00:40.332603 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:00:40.494406 master-0 kubenswrapper[16176]: I1203 14:00:40.494199 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" event={"ID":"bea7d8b9-2778-469b-9f91-fffbf7de5e68","Type":"ContainerStarted","Data":"603f5c42cfe32f9047c9208f62023807c8910157b24b27bfb3041b1af52546c8"} Dec 03 14:00:40.496782 master-0 kubenswrapper[16176]: I1203 14:00:40.496732 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"ab5e3a3e803ad2fa9e552b5c448abfac370df50c48257b25a5dcf38408830685"} Dec 03 14:00:40.498846 master-0 kubenswrapper[16176]: I1203 14:00:40.498760 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:00:40.501954 master-0 kubenswrapper[16176]: I1203 14:00:40.501920 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:00:40.532812 master-0 kubenswrapper[16176]: I1203 14:00:40.532726 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podStartSLOduration=2.713684715 podStartE2EDuration="9.532694151s" podCreationTimestamp="2025-12-03 14:00:31 +0000 UTC" firstStartedPulling="2025-12-03 14:00:32.469492938 +0000 UTC m=+122.895133600" lastFinishedPulling="2025-12-03 14:00:39.288502374 +0000 UTC m=+129.714143036" observedRunningTime="2025-12-03 14:00:40.522161935 +0000 UTC m=+130.947802597" watchObservedRunningTime="2025-12-03 14:00:40.532694151 +0000 UTC m=+130.958334813" Dec 03 14:00:40.962678 master-0 kubenswrapper[16176]: I1203 14:00:40.962576 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-6f5db8559b-96ljh"] Dec 03 14:00:40.978829 master-0 kubenswrapper[16176]: W1203 14:00:40.978729 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6dd61097_7ea1_4d1d_9d4d_a781a0a59e7d.slice/crio-a6994998929145baeec8b4ca0ca00a353dd316ef5f133c5b4fe7cbeec2b7473c WatchSource:0}: Error finding container a6994998929145baeec8b4ca0ca00a353dd316ef5f133c5b4fe7cbeec2b7473c: Status 404 returned error can't find the container with id a6994998929145baeec8b4ca0ca00a353dd316ef5f133c5b4fe7cbeec2b7473c Dec 03 14:00:41.505967 master-0 kubenswrapper[16176]: I1203 14:00:41.505917 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6fcd4b8856-ztns6_ecc68b17-9112-471d-89f9-15bf30dfa004/route-controller-manager/0.log" Dec 03 14:00:41.506542 
master-0 kubenswrapper[16176]: I1203 14:00:41.505980 16176 generic.go:334] "Generic (PLEG): container finished" podID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerID="f95667957e520f9243348981a363f18d6f40c97711a492658710a77286524bca" exitCode=137 Dec 03 14:00:41.506542 master-0 kubenswrapper[16176]: I1203 14:00:41.506067 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" event={"ID":"ecc68b17-9112-471d-89f9-15bf30dfa004","Type":"ContainerDied","Data":"f95667957e520f9243348981a363f18d6f40c97711a492658710a77286524bca"} Dec 03 14:00:41.506542 master-0 kubenswrapper[16176]: I1203 14:00:41.506205 16176 scope.go:117] "RemoveContainer" containerID="f95667957e520f9243348981a363f18d6f40c97711a492658710a77286524bca" Dec 03 14:00:41.507858 master-0 kubenswrapper[16176]: I1203 14:00:41.507589 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"a6994998929145baeec8b4ca0ca00a353dd316ef5f133c5b4fe7cbeec2b7473c"} Dec 03 14:00:41.781580 master-0 kubenswrapper[16176]: I1203 14:00:41.781456 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 14:00:41.934313 master-0 kubenswrapper[16176]: I1203 14:00:41.934231 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") pod \"ecc68b17-9112-471d-89f9-15bf30dfa004\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " Dec 03 14:00:41.934607 master-0 kubenswrapper[16176]: I1203 14:00:41.934342 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") pod \"ecc68b17-9112-471d-89f9-15bf30dfa004\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " Dec 03 14:00:41.934607 master-0 kubenswrapper[16176]: I1203 14:00:41.934409 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") pod \"ecc68b17-9112-471d-89f9-15bf30dfa004\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " Dec 03 14:00:41.934607 master-0 kubenswrapper[16176]: I1203 14:00:41.934457 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") pod \"ecc68b17-9112-471d-89f9-15bf30dfa004\" (UID: \"ecc68b17-9112-471d-89f9-15bf30dfa004\") " Dec 03 14:00:41.935414 master-0 kubenswrapper[16176]: I1203 14:00:41.935348 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca" (OuterVolumeSpecName: "client-ca") pod "ecc68b17-9112-471d-89f9-15bf30dfa004" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:41.935919 master-0 kubenswrapper[16176]: I1203 14:00:41.935855 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config" (OuterVolumeSpecName: "config") pod "ecc68b17-9112-471d-89f9-15bf30dfa004" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:00:41.939754 master-0 kubenswrapper[16176]: I1203 14:00:41.939707 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ecc68b17-9112-471d-89f9-15bf30dfa004" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:00:41.941019 master-0 kubenswrapper[16176]: I1203 14:00:41.940718 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk" (OuterVolumeSpecName: "kube-api-access-jpttk") pod "ecc68b17-9112-471d-89f9-15bf30dfa004" (UID: "ecc68b17-9112-471d-89f9-15bf30dfa004"). InnerVolumeSpecName "kube-api-access-jpttk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:00:42.039012 master-0 kubenswrapper[16176]: I1203 14:00:42.037851 16176 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ecc68b17-9112-471d-89f9-15bf30dfa004-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:42.039012 master-0 kubenswrapper[16176]: I1203 14:00:42.037920 16176 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:42.039012 master-0 kubenswrapper[16176]: I1203 14:00:42.037935 16176 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecc68b17-9112-471d-89f9-15bf30dfa004-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:42.039012 master-0 kubenswrapper[16176]: I1203 14:00:42.037947 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpttk\" (UniqueName: \"kubernetes.io/projected/ecc68b17-9112-471d-89f9-15bf30dfa004-kube-api-access-jpttk\") on node \"master-0\" DevicePath \"\"" Dec 03 14:00:42.268134 master-0 kubenswrapper[16176]: I1203 14:00:42.268068 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-b62gf"] Dec 03 14:00:42.269157 master-0 kubenswrapper[16176]: E1203 14:00:42.269132 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" Dec 03 14:00:42.269274 master-0 kubenswrapper[16176]: I1203 14:00:42.269242 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" Dec 03 14:00:42.269553 master-0 kubenswrapper[16176]: I1203 14:00:42.269534 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" 
containerName="route-controller-manager" Dec 03 14:00:42.269669 master-0 kubenswrapper[16176]: I1203 14:00:42.269656 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" Dec 03 14:00:42.276289 master-0 kubenswrapper[16176]: E1203 14:00:42.269929 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" Dec 03 14:00:42.277291 master-0 kubenswrapper[16176]: I1203 14:00:42.277237 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" containerName="route-controller-manager" Dec 03 14:00:42.278818 master-0 kubenswrapper[16176]: I1203 14:00:42.278792 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.284675 master-0 kubenswrapper[16176]: I1203 14:00:42.284621 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:00:42.286420 master-0 kubenswrapper[16176]: I1203 14:00:42.284932 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 03 14:00:42.292362 master-0 kubenswrapper[16176]: I1203 14:00:42.292223 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"] Dec 03 14:00:42.293926 master-0 kubenswrapper[16176]: I1203 14:00:42.293871 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.296625 master-0 kubenswrapper[16176]: I1203 14:00:42.296518 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"] Dec 03 14:00:42.297854 master-0 kubenswrapper[16176]: I1203 14:00:42.297791 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.303149 master-0 kubenswrapper[16176]: I1203 14:00:42.302673 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 03 14:00:42.303149 master-0 kubenswrapper[16176]: I1203 14:00:42.302711 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 03 14:00:42.303149 master-0 kubenswrapper[16176]: I1203 14:00:42.302914 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 03 14:00:42.303149 master-0 kubenswrapper[16176]: I1203 14:00:42.303048 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 03 14:00:42.313503 master-0 kubenswrapper[16176]: I1203 14:00:42.313453 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 03 14:00:42.315146 master-0 kubenswrapper[16176]: I1203 14:00:42.315078 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"] Dec 03 14:00:42.327214 master-0 kubenswrapper[16176]: I1203 14:00:42.327162 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"] Dec 03 14:00:42.444203 master-0 kubenswrapper[16176]: 
I1203 14:00:42.444121 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.444203 master-0 kubenswrapper[16176]: I1203 14:00:42.444209 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444234 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444278 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444300 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444330 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444359 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444392 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444413 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " 
pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444438 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444462 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444481 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444504 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444527 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.444565 master-0 kubenswrapper[16176]: I1203 14:00:42.444557 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.445409 master-0 kubenswrapper[16176]: I1203 14:00:42.444745 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.445409 master-0 kubenswrapper[16176]: I1203 14:00:42.444922 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.445409 master-0 kubenswrapper[16176]: I1203 14:00:42.445044 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.529814 master-0 kubenswrapper[16176]: I1203 14:00:42.529302 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" event={"ID":"ecc68b17-9112-471d-89f9-15bf30dfa004","Type":"ContainerDied","Data":"53b64d2d94429cb39c687c42e7382e7a8cf7a47e728648b61e261de8268f7a82"} Dec 03 14:00:42.529814 master-0 kubenswrapper[16176]: I1203 14:00:42.529387 16176 scope.go:117] "RemoveContainer" containerID="198e100a87c3ebf3cf56cbb72aee221b10d3d9da6179d5cd2d009567c565ee93" Dec 03 14:00:42.529814 master-0 kubenswrapper[16176]: I1203 14:00:42.529507 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6" Dec 03 14:00:42.543107 master-0 kubenswrapper[16176]: I1203 14:00:42.540854 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" event={"ID":"bea7d8b9-2778-469b-9f91-fffbf7de5e68","Type":"ContainerStarted","Data":"dc65a41ab47ecc33d6e15fa70b631a281a5b5603c8bd7cc62f9b82f52611d9a1"} Dec 03 14:00:42.543107 master-0 kubenswrapper[16176]: I1203 14:00:42.542147 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:42.545857 master-0 kubenswrapper[16176]: I1203 14:00:42.545836 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.545958 master-0 kubenswrapper[16176]: I1203 
14:00:42.545940 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.546079 master-0 kubenswrapper[16176]: I1203 14:00:42.546062 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.546160 master-0 kubenswrapper[16176]: I1203 14:00:42.546146 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.546232 master-0 kubenswrapper[16176]: I1203 14:00:42.546220 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.546336 master-0 kubenswrapper[16176]: I1203 14:00:42.546321 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: 
\"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.546429 master-0 kubenswrapper[16176]: I1203 14:00:42.546415 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.546517 master-0 kubenswrapper[16176]: I1203 14:00:42.546502 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.546598 master-0 kubenswrapper[16176]: I1203 14:00:42.546585 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.546682 master-0 kubenswrapper[16176]: I1203 14:00:42.546666 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.546762 master-0 kubenswrapper[16176]: I1203 14:00:42.546750 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.546836 master-0 kubenswrapper[16176]: I1203 14:00:42.546824 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.546914 master-0 kubenswrapper[16176]: I1203 14:00:42.546899 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.546992 master-0 kubenswrapper[16176]: I1203 14:00:42.546979 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.547066 master-0 kubenswrapper[16176]: I1203 14:00:42.547054 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " 
pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.547136 master-0 kubenswrapper[16176]: I1203 14:00:42.547123 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.547225 master-0 kubenswrapper[16176]: I1203 14:00:42.547213 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.547317 master-0 kubenswrapper[16176]: I1203 14:00:42.547304 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.548395 master-0 kubenswrapper[16176]: I1203 14:00:42.548343 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.548470 master-0 kubenswrapper[16176]: I1203 14:00:42.548444 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" 
(UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.548658 master-0 kubenswrapper[16176]: I1203 14:00:42.548632 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.549023 master-0 kubenswrapper[16176]: E1203 14:00:42.548986 16176 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Dec 03 14:00:42.549079 master-0 kubenswrapper[16176]: I1203 14:00:42.549041 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.549117 master-0 kubenswrapper[16176]: E1203 14:00:42.549066 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:00:43.049046824 +0000 UTC m=+133.474687486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : secret "kube-state-metrics-tls" not found Dec 03 14:00:42.549868 master-0 kubenswrapper[16176]: I1203 14:00:42.549802 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.549927 master-0 kubenswrapper[16176]: I1203 14:00:42.549872 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.550499 master-0 kubenswrapper[16176]: I1203 14:00:42.550465 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.550751 master-0 kubenswrapper[16176]: I1203 14:00:42.550618 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.551001 master-0 kubenswrapper[16176]: 
I1203 14:00:42.550974 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.551054 master-0 kubenswrapper[16176]: I1203 14:00:42.551015 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.556654 master-0 kubenswrapper[16176]: I1203 14:00:42.555028 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.557393 master-0 kubenswrapper[16176]: I1203 14:00:42.557364 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.557879 master-0 kubenswrapper[16176]: I1203 14:00:42.557836 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.558443 master-0 kubenswrapper[16176]: I1203 14:00:42.558384 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.578293 master-0 kubenswrapper[16176]: I1203 14:00:42.578229 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:00:42.584064 master-0 kubenswrapper[16176]: I1203 14:00:42.581802 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.589168 master-0 kubenswrapper[16176]: I1203 14:00:42.586938 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.615867 
master-0 kubenswrapper[16176]: I1203 14:00:42.615777 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:00:42.616164 master-0 kubenswrapper[16176]: I1203 14:00:42.616031 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" podStartSLOduration=6.480550086 podStartE2EDuration="8.616003762s" podCreationTimestamp="2025-12-03 14:00:34 +0000 UTC" firstStartedPulling="2025-12-03 14:00:39.715661702 +0000 UTC m=+130.141302374" lastFinishedPulling="2025-12-03 14:00:41.851115388 +0000 UTC m=+132.276756050" observedRunningTime="2025-12-03 14:00:42.606097183 +0000 UTC m=+133.031737865" watchObservedRunningTime="2025-12-03 14:00:42.616003762 +0000 UTC m=+133.041644424" Dec 03 14:00:42.644443 master-0 kubenswrapper[16176]: I1203 14:00:42.640671 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:00:42.645849 master-0 kubenswrapper[16176]: I1203 14:00:42.645195 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"] Dec 03 14:00:42.650775 master-0 kubenswrapper[16176]: I1203 14:00:42.648715 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fcd4b8856-ztns6"] Dec 03 14:00:42.842548 master-0 kubenswrapper[16176]: I1203 14:00:42.842489 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:00:42.950292 master-0 kubenswrapper[16176]: I1203 14:00:42.949826 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"] Dec 03 14:00:42.954282 master-0 kubenswrapper[16176]: I1203 14:00:42.951106 16176 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" Dec 03 14:00:42.954282 master-0 kubenswrapper[16176]: I1203 14:00:42.953875 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 14:00:42.954282 master-0 kubenswrapper[16176]: I1203 14:00:42.954161 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 14:00:42.956452 master-0 kubenswrapper[16176]: I1203 14:00:42.955605 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68" Dec 03 14:00:42.973763 master-0 kubenswrapper[16176]: I1203 14:00:42.972162 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 14:00:42.973763 master-0 kubenswrapper[16176]: I1203 14:00:42.973156 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 14:00:42.973763 master-0 kubenswrapper[16176]: I1203 14:00:42.973696 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 14:00:43.030510 master-0 kubenswrapper[16176]: I1203 14:00:43.030373 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"] Dec 03 14:00:43.056872 master-0 kubenswrapper[16176]: I1203 14:00:43.056757 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " 
pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.056872 master-0 kubenswrapper[16176]: I1203 14:00:43.056852 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cck9v\" (UniqueName: \"kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.057226 master-0 kubenswrapper[16176]: I1203 14:00:43.057007 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.057226 master-0 kubenswrapper[16176]: I1203 14:00:43.057128 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.057503 master-0 kubenswrapper[16176]: I1203 14:00:43.057454 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:00:43.075959 master-0 kubenswrapper[16176]: I1203 14:00:43.071506 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:00:43.121729 master-0 kubenswrapper[16176]: I1203 14:00:43.118463 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"]
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.160170 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.160407 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.160473 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cck9v\" (UniqueName: \"kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.160504 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.163023 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.164036 master-0 kubenswrapper[16176]: I1203 14:00:43.163886 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.169281 master-0 kubenswrapper[16176]: I1203 14:00:43.169114 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.193372 master-0 kubenswrapper[16176]: I1203 14:00:43.193303 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cck9v\" (UniqueName: \"kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v\") pod \"route-controller-manager-84f75d5446-j8tkx\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.264796 master-0 kubenswrapper[16176]: I1203 14:00:43.264739 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:00:43.317976 master-0 kubenswrapper[16176]: I1203 14:00:43.316782 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:43.410987 master-0 kubenswrapper[16176]: I1203 14:00:43.410920 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 03 14:00:43.415785 master-0 kubenswrapper[16176]: I1203 14:00:43.415708 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.419734 master-0 kubenswrapper[16176]: I1203 14:00:43.419671 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Dec 03 14:00:43.419949 master-0 kubenswrapper[16176]: I1203 14:00:43.419924 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Dec 03 14:00:43.420855 master-0 kubenswrapper[16176]: I1203 14:00:43.420068 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Dec 03 14:00:43.420855 master-0 kubenswrapper[16176]: I1203 14:00:43.420308 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Dec 03 14:00:43.420855 master-0 kubenswrapper[16176]: I1203 14:00:43.420484 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Dec 03 14:00:43.420855 master-0 kubenswrapper[16176]: I1203 14:00:43.420733 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Dec 03 14:00:43.421026 master-0 kubenswrapper[16176]: I1203 14:00:43.420865 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Dec 03 14:00:43.421026 master-0 kubenswrapper[16176]: I1203 14:00:43.421004 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Dec 03 14:00:43.442521 master-0 kubenswrapper[16176]: I1203 14:00:43.442189 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 03 14:00:43.576208 master-0 kubenswrapper[16176]: I1203 14:00:43.576145 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576222 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576268 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576297 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576321 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576347 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576383 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576403 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576422 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576444 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576487 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.576669 master-0 kubenswrapper[16176]: I1203 14:00:43.576525 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.600411 master-0 kubenswrapper[16176]: I1203 14:00:43.600213 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"5dfbbb0a992a6c3399210f337dd1fc3bad574cdd201a086dd6a45a86b62681a3"}
Dec 03 14:00:43.600411 master-0 kubenswrapper[16176]: I1203 14:00:43.600314 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"a6f26673e70c43aa528a4bf60c1a842440861a940ebf1f24a9be5658e255d605"}
Dec 03 14:00:43.620624 master-0 kubenswrapper[16176]: I1203 14:00:43.620498 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"75704bf77cfdaa0f95352602d0dd2010ad8d5b6e64879013b8ac6525f0cb85f3"}
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681316 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681426 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681491 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681522 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681556 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681583 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681621 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681652 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681705 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681757 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681791 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.681825 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.683630 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.684899 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.685760 master-0 kubenswrapper[16176]: I1203 14:00:43.685549 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.687977 master-0 kubenswrapper[16176]: I1203 14:00:43.685957 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.702125 master-0 kubenswrapper[16176]: I1203 14:00:43.689636 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.702125 master-0 kubenswrapper[16176]: I1203 14:00:43.689995 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.702125 master-0 kubenswrapper[16176]: I1203 14:00:43.690147 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.702125 master-0 kubenswrapper[16176]: I1203 14:00:43.690664 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.702125 master-0 kubenswrapper[16176]: I1203 14:00:43.691034 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.711467 master-0 kubenswrapper[16176]: I1203 14:00:43.703709 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.711467 master-0 kubenswrapper[16176]: I1203 14:00:43.710688 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.724943 master-0 kubenswrapper[16176]: I1203 14:00:43.715363 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.810729 master-0 kubenswrapper[16176]: I1203 14:00:43.810674 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:00:43.811905 master-0 kubenswrapper[16176]: I1203 14:00:43.811865 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc68b17-9112-471d-89f9-15bf30dfa004" path="/var/lib/kubelet/pods/ecc68b17-9112-471d-89f9-15bf30dfa004/volumes"
Dec 03 14:00:43.866038 master-0 kubenswrapper[16176]: I1203 14:00:43.862898 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"]
Dec 03 14:00:43.887403 master-0 kubenswrapper[16176]: W1203 14:00:43.887305 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eee1d96_2f58_41a6_ae51_c158b29fc813.slice/crio-67f3bd1b930cf0b102f5071d35844f4badd99b85d103c3da7e02657771fe2c58 WatchSource:0}: Error finding container 67f3bd1b930cf0b102f5071d35844f4badd99b85d103c3da7e02657771fe2c58: Status 404 returned error can't find the container with id 67f3bd1b930cf0b102f5071d35844f4badd99b85d103c3da7e02657771fe2c58
Dec 03 14:00:44.195159 master-0 kubenswrapper[16176]: I1203 14:00:44.194845 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"]
Dec 03 14:00:44.472562 master-0 kubenswrapper[16176]: I1203 14:00:44.472438 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 03 14:00:44.516277 master-0 kubenswrapper[16176]: W1203 14:00:44.515561 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff21a9a5_706f_4c71_bd0c_5586374f819a.slice/crio-ba6ff35860ed4340a1767dc68e1d25ce59ff5b8f5bd153f49322cd7314c825e3 WatchSource:0}: Error finding container ba6ff35860ed4340a1767dc68e1d25ce59ff5b8f5bd153f49322cd7314c825e3: Status 404 returned error can't find the container with id ba6ff35860ed4340a1767dc68e1d25ce59ff5b8f5bd153f49322cd7314c825e3
Dec 03 14:00:44.601930 master-0 kubenswrapper[16176]: I1203 14:00:44.601322 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"]
Dec 03 14:00:44.611241 master-0 kubenswrapper[16176]: I1203 14:00:44.611156 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.622807 master-0 kubenswrapper[16176]: I1203 14:00:44.622423 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Dec 03 14:00:44.624184 master-0 kubenswrapper[16176]: I1203 14:00:44.622924 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Dec 03 14:00:44.625612 master-0 kubenswrapper[16176]: I1203 14:00:44.624915 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Dec 03 14:00:44.625612 master-0 kubenswrapper[16176]: I1203 14:00:44.625363 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Dec 03 14:00:44.625612 master-0 kubenswrapper[16176]: I1203 14:00:44.625362 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf"
Dec 03 14:00:44.625612 master-0 kubenswrapper[16176]: I1203 14:00:44.625498 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Dec 03 14:00:44.699148 master-0 kubenswrapper[16176]: I1203 14:00:44.699001 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699148 master-0 kubenswrapper[16176]: I1203 14:00:44.699089 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699501 master-0 kubenswrapper[16176]: I1203 14:00:44.699179 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699501 master-0 kubenswrapper[16176]: I1203 14:00:44.699213 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699501 master-0 kubenswrapper[16176]: I1203 14:00:44.699238 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699501 master-0 kubenswrapper[16176]: I1203 14:00:44.699464 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699783 master-0 kubenswrapper[16176]: I1203 14:00:44.699744 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.699985 master-0 kubenswrapper[16176]: I1203 14:00:44.699806 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.700289 master-0 kubenswrapper[16176]: I1203 14:00:44.700160 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"]
Dec 03 14:00:44.704479 master-0 kubenswrapper[16176]: I1203 14:00:44.704237 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"67f3bd1b930cf0b102f5071d35844f4badd99b85d103c3da7e02657771fe2c58"}
Dec 03 14:00:44.719350 master-0 kubenswrapper[16176]: I1203 14:00:44.719207 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"ba6ff35860ed4340a1767dc68e1d25ce59ff5b8f5bd153f49322cd7314c825e3"}
Dec 03 14:00:44.734939 master-0 kubenswrapper[16176]: I1203 14:00:44.734875 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"e134bd03f8d94ccc31b157a88dbe27e9d8f8d599da864933d7d0eaca01de317a"}
Dec 03 14:00:44.743553 master-0 kubenswrapper[16176]: I1203 14:00:44.743476 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" event={"ID":"f1221807-c515-4995-a085-8fc98f62932f","Type":"ContainerStarted","Data":"8811e14090f8b0635c52b9294640b9ceb5ebd2e9753c457be1775020b2aa73ae"}
Dec 03 14:00:44.743911 master-0 kubenswrapper[16176]: I1203 14:00:44.743879 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"
Dec 03 14:00:44.772295 master-0 kubenswrapper[16176]: I1203 14:00:44.769292 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" podStartSLOduration=14.769190274 podStartE2EDuration="14.769190274s" podCreationTimestamp="2025-12-03 14:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:00:44.766232498 +0000 UTC m=+135.191873190" watchObservedRunningTime="2025-12-03 14:00:44.769190274 +0000 UTC m=+135.194830936"
Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801539 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801639 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801714 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801746 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801774
16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801798 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801831 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.801866 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.805289 master-0 kubenswrapper[16176]: I1203 14:00:44.803273 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: 
\"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.810314 master-0 kubenswrapper[16176]: I1203 14:00:44.807594 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.810314 master-0 kubenswrapper[16176]: I1203 14:00:44.807710 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.810314 master-0 kubenswrapper[16176]: I1203 14:00:44.808496 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.810314 master-0 kubenswrapper[16176]: I1203 14:00:44.809147 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.810314 master-0 kubenswrapper[16176]: I1203 14:00:44.809917 16176 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.831419 master-0 kubenswrapper[16176]: I1203 14:00:44.829528 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.834800 master-0 kubenswrapper[16176]: I1203 14:00:44.834730 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:44.990113 master-0 kubenswrapper[16176]: I1203 14:00:44.989940 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:00:45.556002 master-0 kubenswrapper[16176]: I1203 14:00:45.555928 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" Dec 03 14:00:45.760348 master-0 kubenswrapper[16176]: I1203 14:00:45.760223 16176 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="38fa7830bfca6dfe2a6e75c242279c82f6c2490f444a5e9b37d9edbc17f1e847" exitCode=0 Dec 03 14:00:45.760348 master-0 kubenswrapper[16176]: I1203 14:00:45.760360 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"38fa7830bfca6dfe2a6e75c242279c82f6c2490f444a5e9b37d9edbc17f1e847"} Dec 03 14:00:45.764238 master-0 kubenswrapper[16176]: I1203 14:00:45.764195 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" event={"ID":"f1221807-c515-4995-a085-8fc98f62932f","Type":"ContainerStarted","Data":"7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f"} Dec 03 14:00:46.074873 master-0 kubenswrapper[16176]: I1203 14:00:46.074806 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:00:46.076425 master-0 kubenswrapper[16176]: I1203 14:00:46.076397 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.080444 master-0 kubenswrapper[16176]: I1203 14:00:46.080388 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 14:00:46.086862 master-0 kubenswrapper[16176]: I1203 14:00:46.086798 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 14:00:46.087122 master-0 kubenswrapper[16176]: I1203 14:00:46.087070 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 14:00:46.087181 master-0 kubenswrapper[16176]: I1203 14:00:46.087135 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 14:00:46.087367 master-0 kubenswrapper[16176]: I1203 14:00:46.087308 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-twpdm" Dec 03 14:00:46.087533 master-0 kubenswrapper[16176]: I1203 14:00:46.087308 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 14:00:46.106316 master-0 kubenswrapper[16176]: I1203 14:00:46.098625 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:00:46.237238 master-0 kubenswrapper[16176]: I1203 14:00:46.236645 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.237238 master-0 kubenswrapper[16176]: I1203 14:00:46.236723 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.237238 master-0 kubenswrapper[16176]: I1203 14:00:46.236854 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.237238 master-0 kubenswrapper[16176]: I1203 14:00:46.237001 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.238060 master-0 kubenswrapper[16176]: I1203 14:00:46.237348 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.238060 master-0 kubenswrapper[16176]: I1203 14:00:46.237511 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.339007 
16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.339095 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.339147 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.339173 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.339223 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 
kubenswrapper[16176]: I1203 14:00:46.339247 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.341295 master-0 kubenswrapper[16176]: I1203 14:00:46.340432 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.342230 master-0 kubenswrapper[16176]: I1203 14:00:46.342089 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.344452 master-0 kubenswrapper[16176]: I1203 14:00:46.343760 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.347402 master-0 kubenswrapper[16176]: I1203 14:00:46.347314 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.354080 master-0 
kubenswrapper[16176]: I1203 14:00:46.354042 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.361782 master-0 kubenswrapper[16176]: I1203 14:00:46.361736 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.427226 master-0 kubenswrapper[16176]: I1203 14:00:46.427172 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:00:46.587658 master-0 kubenswrapper[16176]: I1203 14:00:46.587518 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"] Dec 03 14:00:46.605691 master-0 kubenswrapper[16176]: W1203 14:00:46.605540 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a12409a_0be3_4023_9df3_a0f091aac8dc.slice/crio-972d607bf39625811396c6ffb85c15ed299a28395b8f6e404b2dd6758c0fe664 WatchSource:0}: Error finding container 972d607bf39625811396c6ffb85c15ed299a28395b8f6e404b2dd6758c0fe664: Status 404 returned error can't find the container with id 972d607bf39625811396c6ffb85c15ed299a28395b8f6e404b2dd6758c0fe664 Dec 03 14:00:46.773341 master-0 kubenswrapper[16176]: I1203 14:00:46.773226 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" 
event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"972d607bf39625811396c6ffb85c15ed299a28395b8f6e404b2dd6758c0fe664"} Dec 03 14:00:46.776184 master-0 kubenswrapper[16176]: I1203 14:00:46.776140 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"d6c55f6716708ffd9697648df2c909b367e721e7331928d93a6855113e7545e3"} Dec 03 14:00:46.779248 master-0 kubenswrapper[16176]: I1203 14:00:46.779187 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"2d667c8e5534b2ba9a4b068bb037fee023ee61772c5e8bb0ae3dfb586f8f8cf6"} Dec 03 14:00:46.779347 master-0 kubenswrapper[16176]: I1203 14:00:46.779269 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"df7afb54b47f612999216bf1266d421e3dfe58a44fe13dfaccad23838e1be411"} Dec 03 14:00:46.781337 master-0 kubenswrapper[16176]: I1203 14:00:46.781302 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528" exitCode=0 Dec 03 14:00:46.781417 master-0 kubenswrapper[16176]: I1203 14:00:46.781375 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528"} Dec 03 14:00:46.919561 master-0 kubenswrapper[16176]: I1203 14:00:46.919389 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" 
podStartSLOduration=2.429106181 podStartE2EDuration="4.91935278s" podCreationTimestamp="2025-12-03 14:00:42 +0000 UTC" firstStartedPulling="2025-12-03 14:00:43.634538464 +0000 UTC m=+134.060179126" lastFinishedPulling="2025-12-03 14:00:46.124785063 +0000 UTC m=+136.550425725" observedRunningTime="2025-12-03 14:00:46.911657236 +0000 UTC m=+137.337297918" watchObservedRunningTime="2025-12-03 14:00:46.91935278 +0000 UTC m=+137.344993452" Dec 03 14:00:46.925553 master-0 kubenswrapper[16176]: I1203 14:00:46.925472 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:00:46.963003 master-0 kubenswrapper[16176]: I1203 14:00:46.962895 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-b62gf" podStartSLOduration=3.107467447 podStartE2EDuration="4.962868086s" podCreationTimestamp="2025-12-03 14:00:42 +0000 UTC" firstStartedPulling="2025-12-03 14:00:42.664835012 +0000 UTC m=+133.090475674" lastFinishedPulling="2025-12-03 14:00:44.520235651 +0000 UTC m=+134.945876313" observedRunningTime="2025-12-03 14:00:46.958060796 +0000 UTC m=+137.383701478" watchObservedRunningTime="2025-12-03 14:00:46.962868086 +0000 UTC m=+137.388508748" Dec 03 14:00:47.883083 master-0 kubenswrapper[16176]: I1203 14:00:47.882655 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-555496955b-vpcbs"] Dec 03 14:00:47.883913 master-0 kubenswrapper[16176]: I1203 14:00:47.883892 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:47.887207 master-0 kubenswrapper[16176]: I1203 14:00:47.887066 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 03 14:00:47.887361 master-0 kubenswrapper[16176]: I1203 14:00:47.887299 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Dec 03 14:00:47.887456 master-0 kubenswrapper[16176]: I1203 14:00:47.887289 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Dec 03 14:00:47.887564 master-0 kubenswrapper[16176]: I1203 14:00:47.887503 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Dec 03 14:00:47.887852 master-0 kubenswrapper[16176]: I1203 14:00:47.887673 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2bc14vqi7sofg" Dec 03 14:00:47.899771 master-0 kubenswrapper[16176]: I1203 14:00:47.898449 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-555496955b-vpcbs"] Dec 03 14:00:48.073916 master-0 kubenswrapper[16176]: I1203 14:00:48.073861 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074020 master-0 kubenswrapper[16176]: I1203 14:00:48.073938 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod 
\"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074020 master-0 kubenswrapper[16176]: I1203 14:00:48.074004 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074127 master-0 kubenswrapper[16176]: I1203 14:00:48.074080 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074180 master-0 kubenswrapper[16176]: I1203 14:00:48.074124 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074180 master-0 kubenswrapper[16176]: I1203 14:00:48.074165 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.074299 master-0 kubenswrapper[16176]: I1203 14:00:48.074199 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176361 master-0 kubenswrapper[16176]: I1203 14:00:48.176287 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176361 master-0 kubenswrapper[16176]: I1203 14:00:48.176365 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176618 master-0 kubenswrapper[16176]: I1203 14:00:48.176391 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176618 master-0 kubenswrapper[16176]: I1203 14:00:48.176417 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176618 master-0 kubenswrapper[16176]: I1203 14:00:48.176443 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176618 master-0 kubenswrapper[16176]: I1203 14:00:48.176469 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.176618 master-0 kubenswrapper[16176]: I1203 14:00:48.176504 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.179143 master-0 kubenswrapper[16176]: I1203 14:00:48.179073 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.179777 master-0 kubenswrapper[16176]: I1203 14:00:48.179713 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.179965 master-0 kubenswrapper[16176]: I1203 14:00:48.179930 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.182803 master-0 kubenswrapper[16176]: I1203 14:00:48.182752 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.182904 master-0 kubenswrapper[16176]: I1203 14:00:48.182815 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.191421 master-0 kubenswrapper[16176]: I1203 14:00:48.191146 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"] Dec 03 14:00:48.193429 master-0 kubenswrapper[16176]: 
I1203 14:00:48.193397 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:00:48.196329 master-0 kubenswrapper[16176]: I1203 14:00:48.194898 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.197514 master-0 kubenswrapper[16176]: I1203 14:00:48.196566 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-jmtqw" Dec 03 14:00:48.202509 master-0 kubenswrapper[16176]: I1203 14:00:48.198694 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Dec 03 14:00:48.229824 master-0 kubenswrapper[16176]: I1203 14:00:48.229746 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"] Dec 03 14:00:48.250963 master-0 kubenswrapper[16176]: I1203 14:00:48.250901 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.380159 master-0 kubenswrapper[16176]: I1203 14:00:48.380086 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:00:48.481569 master-0 kubenswrapper[16176]: I1203 14:00:48.481515 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:00:48.486095 master-0 kubenswrapper[16176]: I1203 14:00:48.486047 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:00:48.513320 master-0 kubenswrapper[16176]: I1203 14:00:48.513225 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:00:48.614296 master-0 kubenswrapper[16176]: I1203 14:00:48.614124 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:00:48.824414 master-0 kubenswrapper[16176]: I1203 14:00:48.824343 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"e1c459dde35a615ba7882fda79fdee2dd95b10a24c239fcce598bdf3ad30914a"} Dec 03 14:00:48.824414 master-0 kubenswrapper[16176]: I1203 14:00:48.824414 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"061547e7d7b4af7eb58e2c7231ae020567b904227a23d4c97a1b77417b710997"} Dec 03 14:00:48.824414 master-0 kubenswrapper[16176]: I1203 14:00:48.824432 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"31a1968fe005da1858b89a5e00cb177cc6f28af82f98edf135f3f1121701bc4c"} Dec 03 14:00:48.839732 master-0 kubenswrapper[16176]: I1203 14:00:48.834233 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerStarted","Data":"9acbf482829ab49a332c56b34a767b6f0494561672802d7431ac202cec026538"} Dec 03 14:00:48.853480 master-0 kubenswrapper[16176]: I1203 14:00:48.853366 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podStartSLOduration=2.839331515 podStartE2EDuration="6.853246712s" podCreationTimestamp="2025-12-03 14:00:42 +0000 UTC" firstStartedPulling="2025-12-03 14:00:43.891119288 +0000 UTC m=+134.316759950" lastFinishedPulling="2025-12-03 14:00:47.905034485 +0000 UTC m=+138.330675147" observedRunningTime="2025-12-03 
14:00:48.851781599 +0000 UTC m=+139.277422281" watchObservedRunningTime="2025-12-03 14:00:48.853246712 +0000 UTC m=+139.278887394" Dec 03 14:00:49.505583 master-0 kubenswrapper[16176]: I1203 14:00:49.505441 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-555496955b-vpcbs"] Dec 03 14:00:49.519551 master-0 kubenswrapper[16176]: I1203 14:00:49.517985 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"] Dec 03 14:00:50.858944 master-0 kubenswrapper[16176]: W1203 14:00:50.858888 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09b7b0c6_47cc_4860_8c78_9583bb5b0a6e.slice/crio-2caac400732abb070f9a89c9d0988a0e2a59bf06f93a0e74b8e31e3776846a66 WatchSource:0}: Error finding container 2caac400732abb070f9a89c9d0988a0e2a59bf06f93a0e74b8e31e3776846a66: Status 404 returned error can't find the container with id 2caac400732abb070f9a89c9d0988a0e2a59bf06f93a0e74b8e31e3776846a66 Dec 03 14:00:50.891984 master-0 kubenswrapper[16176]: I1203 14:00:50.891906 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:00:50.895307 master-0 kubenswrapper[16176]: I1203 14:00:50.895251 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.909676 master-0 kubenswrapper[16176]: I1203 14:00:50.909590 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Dec 03 14:00:50.909989 master-0 kubenswrapper[16176]: I1203 14:00:50.909802 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Dec 03 14:00:50.909989 master-0 kubenswrapper[16176]: I1203 14:00:50.909950 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Dec 03 14:00:50.910101 master-0 kubenswrapper[16176]: I1203 14:00:50.910033 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Dec 03 14:00:50.910101 master-0 kubenswrapper[16176]: I1203 14:00:50.910086 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Dec 03 14:00:50.910178 master-0 kubenswrapper[16176]: I1203 14:00:50.910157 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 03 14:00:50.910328 master-0 kubenswrapper[16176]: I1203 14:00:50.910304 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 03 14:00:50.910546 master-0 kubenswrapper[16176]: I1203 14:00:50.910470 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Dec 03 14:00:50.912545 master-0 kubenswrapper[16176]: I1203 14:00:50.912498 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Dec 03 14:00:50.913068 master-0 kubenswrapper[16176]: I1203 14:00:50.913034 16176 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Dec 03 14:00:50.913328 master-0 kubenswrapper[16176]: I1203 14:00:50.913297 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" Dec 03 14:00:50.916227 master-0 kubenswrapper[16176]: I1203 14:00:50.916188 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Dec 03 14:00:50.933305 master-0 kubenswrapper[16176]: I1203 14:00:50.931812 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947646 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947714 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947762 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 
master-0 kubenswrapper[16176]: I1203 14:00:50.947790 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947811 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947845 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947901 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947931 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" 
(UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947962 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.947988 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948014 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948038 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948070 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948091 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948113 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948141 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948171 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:50.948520 master-0 kubenswrapper[16176]: I1203 14:00:50.948201 16176 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050489 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050566 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050630 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050664 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050706 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050731 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050791 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050818 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050851 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050873 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050892 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050912 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050936 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050952 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050970 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.050989 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.051013 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.052032 master-0 kubenswrapper[16176]: I1203 14:00:51.051055 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.055651 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod 
\"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.058026 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.058073 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.059075 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.060466 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.060784 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.061071 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.063128 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.065589 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.065961 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.065889 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.067299 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.067932 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.068108 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.068764 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.071145 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.072748 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.089031 master-0 kubenswrapper[16176]: I1203 14:00:51.088210 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.276594 master-0 kubenswrapper[16176]: I1203 14:00:51.276415 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:00:51.716244 master-0 kubenswrapper[16176]: I1203 14:00:51.716118 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"]
Dec 03 14:00:51.718425 master-0 kubenswrapper[16176]: I1203 14:00:51.717676 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.720127 master-0 kubenswrapper[16176]: I1203 14:00:51.720078 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 03 14:00:51.768542 master-0 kubenswrapper[16176]: I1203 14:00:51.761871 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"]
Dec 03 14:00:51.768542 master-0 kubenswrapper[16176]: I1203 14:00:51.767081 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.768542 master-0 kubenswrapper[16176]: I1203 14:00:51.767139 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.768542 master-0 kubenswrapper[16176]: I1203 14:00:51.767529 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.768542 master-0 kubenswrapper[16176]: I1203 14:00:51.767692 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.869795 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.869949 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.869985 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.870076 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.870105 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"2caac400732abb070f9a89c9d0988a0e2a59bf06f93a0e74b8e31e3776846a66"}
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.870678 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.873309 master-0 kubenswrapper[16176]: I1203 14:00:51.871289 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"49c91907cc146d2ec502ec5478003551cc6f20afa983ac01d8737b990aba1a3a"}
Dec 03 14:00:51.876033 master-0 kubenswrapper[16176]: I1203 14:00:51.875642 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.888051 master-0 kubenswrapper[16176]: I1203 14:00:51.887983 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:51.891512 master-0 kubenswrapper[16176]: I1203 14:00:51.891454 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:52.060852 master-0 kubenswrapper[16176]: I1203 14:00:52.060641 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:00:52.889039 master-0 kubenswrapper[16176]: I1203 14:00:52.888949 16176 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:00:52.889039 master-0 kubenswrapper[16176]: I1203 14:00:52.889053 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:00:54.822458 master-0 kubenswrapper[16176]: I1203 14:00:54.822375 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 03 14:00:55.060540 master-0 kubenswrapper[16176]: I1203 14:00:55.060482 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-648d88c756-vswh8"]
Dec 03 14:00:55.061713 master-0 kubenswrapper[16176]: I1203 14:00:55.061681 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.074836 master-0 kubenswrapper[16176]: I1203 14:00:55.074640 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 03 14:00:55.087132 master-0 kubenswrapper[16176]: I1203 14:00:55.087044 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-648d88c756-vswh8"]
Dec 03 14:00:55.225393 master-0 kubenswrapper[16176]: I1203 14:00:55.225309 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225393 master-0 kubenswrapper[16176]: I1203 14:00:55.225370 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225707 master-0 kubenswrapper[16176]: I1203 14:00:55.225419 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225707 master-0 kubenswrapper[16176]: I1203 14:00:55.225463 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225707 master-0 kubenswrapper[16176]: I1203 14:00:55.225488 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225707 master-0 kubenswrapper[16176]: I1203 14:00:55.225519 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.225707 master-0 kubenswrapper[16176]: I1203 14:00:55.225557 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.326829 master-0 kubenswrapper[16176]: I1203 14:00:55.326651 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.326829 master-0 kubenswrapper[16176]: I1203 14:00:55.326761 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.326829 master-0 kubenswrapper[16176]: I1203 14:00:55.326836 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.327149 master-0 kubenswrapper[16176]: I1203 14:00:55.326881 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.327149 master-0 kubenswrapper[16176]: I1203 14:00:55.326903 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.327149 master-0 kubenswrapper[16176]: I1203 14:00:55.326944 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.327149 master-0 kubenswrapper[16176]: I1203 14:00:55.326987 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.327860 master-0 kubenswrapper[16176]: I1203 14:00:55.327826 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.329168 master-0 kubenswrapper[16176]: I1203 14:00:55.329138 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.329249 master-0 kubenswrapper[16176]: I1203 14:00:55.329180 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.329689 master-0 kubenswrapper[16176]: I1203 14:00:55.329624 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.331294 master-0 kubenswrapper[16176]: I1203 14:00:55.331269 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.333050 master-0 kubenswrapper[16176]: I1203 14:00:55.333020 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.347896 master-0 kubenswrapper[16176]: I1203 14:00:55.347839 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.402442 master-0 kubenswrapper[16176]: I1203 14:00:55.402360 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:00:55.901871 master-0 kubenswrapper[16176]: I1203 14:00:55.901761 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"64d5dceadf72600bb81987bbea8267eb17dd40aa940c7390d7f1dc939852ecc4"}
Dec 03 14:01:05.775226 master-0 kubenswrapper[16176]: I1203 14:01:05.775120 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"]
Dec 03 14:01:06.216957 master-0 kubenswrapper[16176]: I1203 14:01:06.216865 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"]
Dec 03 14:01:09.514687 master-0 kubenswrapper[16176]: I1203 14:01:09.514604 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Dec 03 14:01:09.517089 master-0 kubenswrapper[16176]: I1203 14:01:09.517051 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.522842 master-0 kubenswrapper[16176]: I1203 14:01:09.521722 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 03 14:01:09.522842 master-0 kubenswrapper[16176]: I1203 14:01:09.522637 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz"
Dec 03 14:01:09.527543 master-0 kubenswrapper[16176]: I1203 14:01:09.527494 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Dec 03 14:01:09.630555 master-0 kubenswrapper[16176]: I1203 14:01:09.629679 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.630555 master-0 kubenswrapper[16176]: I1203 14:01:09.629765 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.630555 master-0 kubenswrapper[16176]: I1203 14:01:09.630466 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.732614 master-0 kubenswrapper[16176]: I1203 14:01:09.732508 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.732909 master-0 kubenswrapper[16176]: I1203 14:01:09.732652 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.732909 master-0 kubenswrapper[16176]: I1203 14:01:09.732715 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.732909 master-0 kubenswrapper[16176]: I1203 14:01:09.732755 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.732909 master-0 kubenswrapper[16176]: I1203 14:01:09.732880 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.752191 master-0 kubenswrapper[16176]: I1203 14:01:09.752152 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access\") pod \"installer-3-master-0\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:09.860077 master-0 kubenswrapper[16176]: I1203 14:01:09.860008 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Dec 03 14:01:11.925094 master-0 kubenswrapper[16176]: I1203 14:01:11.925016 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:01:17.447778 master-0 kubenswrapper[16176]: W1203 14:01:17.447657 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a9c244_f0b3_42e8_ae50_7012c4ecc0ff.slice/crio-d549a0145e91c729823340a5e83638db82314798a6aa894b09b3b9aca0ec51a6 WatchSource:0}: Error finding container d549a0145e91c729823340a5e83638db82314798a6aa894b09b3b9aca0ec51a6: Status 404 returned error can't find the container with id d549a0145e91c729823340a5e83638db82314798a6aa894b09b3b9aca0ec51a6
Dec 03 14:01:17.704363 master-0 kubenswrapper[16176]: I1203 14:01:17.704311 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Dec 03 14:01:18.066972 master-0 kubenswrapper[16176]: I1203 14:01:18.066885 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-648d88c756-vswh8"]
Dec 03 14:01:18.074693 master-0 kubenswrapper[16176]: I1203 14:01:18.074634 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"844c6aabbbe53793b0de8bb4fb9285a5f9d61cfff3bb006dfa5ff29bfaa84b35"}
Dec 03 14:01:18.080304 master-0 kubenswrapper[16176]: I1203 14:01:18.080178 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c"}
Dec 03 14:01:18.083769 master-0 kubenswrapper[16176]: I1203 14:01:18.083718 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"69c5b46bb9b1e5d26666667f963bb2655152b9986371a4e0b57ae312b0389515"}
Dec 03 14:01:18.083895 master-0 kubenswrapper[16176]: I1203 14:01:18.083778 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"d549a0145e91c729823340a5e83638db82314798a6aa894b09b3b9aca0ec51a6"}
Dec 03 14:01:18.084670 master-0 kubenswrapper[16176]: I1203 14:01:18.084622 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:01:18.085151 master-0 kubenswrapper[16176]: W1203 14:01:18.085107 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f94ae7_6043_4761_a16b_e0f072b1364b.slice/crio-14efefc056e9d7930162264c89794825b231af42883720a7ff2326beaa8186f8 WatchSource:0}: Error finding container 14efefc056e9d7930162264c89794825b231af42883720a7ff2326beaa8186f8: Status 404 returned error can't find the container with id 14efefc056e9d7930162264c89794825b231af42883720a7ff2326beaa8186f8
Dec 03 14:01:18.087342 master-0 kubenswrapper[16176]: I1203 14:01:18.087280 16176 patch_prober.go:28] interesting pod/packageserver-7c64dd9d8b-49skr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" start-of-body=
Dec 03 14:01:18.087616 master-0 kubenswrapper[16176]: I1203 14:01:18.087547 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused"
Dec 03 14:01:18.092625 master-0 kubenswrapper[16176]: I1203 14:01:18.092551 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"eb1d92aded35f4e70ee705b9c2fa75beb06f733923ba781b39acba9b70fd643f"}
Dec 03 14:01:18.093495 master-0 kubenswrapper[16176]: I1203 14:01:18.093370 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:01:18.098531 master-0 kubenswrapper[16176]: I1203 14:01:18.098476 16176 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:01:18.098753 master-0 kubenswrapper[16176]: I1203 14:01:18.098681 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:01:18.102614 master-0 kubenswrapper[16176]: I1203 14:01:18.102563 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a"}
Dec 03 14:01:18.104647 master-0 kubenswrapper[16176]: I1203 14:01:18.104604 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"85777c7fd9763007e07cf6b8a5adab3a7194444324941d56bf95568a55ac023e"}
Dec 03 14:01:18.125221 master-0 kubenswrapper[16176]: I1203 14:01:18.125148 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"9058f4b410f7256df900169f0b2bf588775e3621f3e9856797584adba5ed0a94"}
Dec 03 14:01:18.126310 master-0 kubenswrapper[16176]: I1203 14:01:18.126277 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerStarted","Data":"bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1"}
Dec 03 14:01:18.615488 master-0 kubenswrapper[16176]: I1203 14:01:18.615415 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:01:18.623927 master-0 kubenswrapper[16176]: I1203 14:01:18.623857 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:01:18.867630 master-0 kubenswrapper[16176]: I1203 14:01:18.867535 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podStartSLOduration=5.252381284 podStartE2EDuration="31.867506331s" podCreationTimestamp="2025-12-03 14:00:47 +0000 UTC" firstStartedPulling="2025-12-03 14:00:50.868969226 +0000 UTC m=+141.294609888" lastFinishedPulling="2025-12-03 14:01:17.484094273 +0000 UTC m=+167.909734935" observedRunningTime="2025-12-03 14:01:18.863797934 +0000 UTC m=+169.289438616" watchObservedRunningTime="2025-12-03 14:01:18.867506331 +0000 UTC m=+169.293146993"
Dec 03 14:01:18.883798 master-0 kubenswrapper[16176]: I1203 14:01:18.883726 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Dec 03 14:01:19.142911 master-0 kubenswrapper[16176]: I1203 14:01:19.142364 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerStarted","Data":"921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a"}
Dec 03 14:01:19.143146 master-0 kubenswrapper[16176]: I1203 14:01:19.142917 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerStarted","Data":"14efefc056e9d7930162264c89794825b231af42883720a7ff2326beaa8186f8"}
Dec 03 14:01:19.146043 master-0 kubenswrapper[16176]: I1203 14:01:19.146012 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0"}
Dec 03 14:01:19.146127 master-0 kubenswrapper[16176]: I1203 14:01:19.146049 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab"}
Dec 03 14:01:19.149118 master-0 kubenswrapper[16176]: I1203 14:01:19.149090 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"614fbef2371d69f02abb27d872356dac76b8a71901373650b25ef98e8905fcb1"}
Dec 03 14:01:19.149250 master-0 kubenswrapper[16176]: I1203 14:01:19.149124 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"29fc037238de736531b021b6909696c07ee576aafb65e7ef6058ae5092fa824f"}
Dec 03 14:01:19.150122 master-0 kubenswrapper[16176]: I1203 14:01:19.150095 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"7289628d-3a90-4603-9a3f-51ea3ba1ff57","Type":"ContainerStarted","Data":"d0cc8d8cf72e11e70a69d75077d321007bb343e50bb7ba961fedf88d056b53ee"}
Dec 03 14:01:19.151771 master-0 kubenswrapper[16176]: I1203 14:01:19.151713 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c" exitCode=0
Dec 03 14:01:19.151840 master-0 kubenswrapper[16176]: I1203 14:01:19.151784 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c"}
Dec 03 14:01:19.152449 master-0 kubenswrapper[16176]: I1203 14:01:19.152411 16176 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:01:19.152525 master-0 kubenswrapper[16176]: I1203 14:01:19.152474 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:01:19.384313 master-0 kubenswrapper[16176]: I1203 14:01:19.382907 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-6f5db8559b-96ljh" podStartSLOduration=2.6809198 podStartE2EDuration="39.382877605s" podCreationTimestamp="2025-12-03 14:00:40 +0000 UTC" firstStartedPulling="2025-12-03 14:00:40.981585141 +0000 UTC m=+131.407225803" lastFinishedPulling="2025-12-03 14:01:17.683542946 +0000 UTC m=+168.109183608" observedRunningTime="2025-12-03 14:01:19.38270029 +0000 UTC m=+169.808340992" watchObservedRunningTime="2025-12-03 14:01:19.382877605 +0000 UTC m=+169.808518277"
Dec 03 14:01:19.450919 master-0 kubenswrapper[16176]: I1203 14:01:19.450785 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podStartSLOduration=4.861374701 podStartE2EDuration="31.45075768s" podCreationTimestamp="2025-12-03 14:00:48 +0000 UTC" firstStartedPulling="2025-12-03 14:00:50.858869222 +0000 UTC m=+141.284509884" lastFinishedPulling="2025-12-03 14:01:17.448252201 +0000 UTC m=+167.873892863" observedRunningTime="2025-12-03 14:01:19.449618707 +0000 UTC m=+169.875259399" watchObservedRunningTime="2025-12-03 14:01:19.45075768 +0000 UTC m=+169.876398342"
Dec 03 14:01:19.530823 master-0 kubenswrapper[16176]: I1203 14:01:19.529059 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podStartSLOduration=28.529021957 podStartE2EDuration="28.529021957s" podCreationTimestamp="2025-12-03 14:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2025-12-03 14:01:19.5246317 +0000 UTC m=+169.950272372" watchObservedRunningTime="2025-12-03 14:01:19.529021957 +0000 UTC m=+169.954662619" Dec 03 14:01:20.152629 master-0 kubenswrapper[16176]: I1203 14:01:20.152531 16176 patch_prober.go:28] interesting pod/packageserver-7c64dd9d8b-49skr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.90:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:01:20.153371 master-0 kubenswrapper[16176]: I1203 14:01:20.152693 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.90:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 14:01:20.162654 master-0 kubenswrapper[16176]: I1203 14:01:20.162580 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6"} Dec 03 14:01:20.164598 master-0 kubenswrapper[16176]: I1203 14:01:20.164534 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"7289628d-3a90-4603-9a3f-51ea3ba1ff57","Type":"ContainerStarted","Data":"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f"} Dec 03 14:01:20.164670 master-0 kubenswrapper[16176]: I1203 14:01:20.164571 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" containerName="installer" 
containerID="cri-o://712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f" gracePeriod=30 Dec 03 14:01:20.226039 master-0 kubenswrapper[16176]: I1203 14:01:20.223811 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:01:20.239296 master-0 kubenswrapper[16176]: I1203 14:01:20.239033 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c5d7cd7f9-2hp75" podStartSLOduration=16.400431302 podStartE2EDuration="34.238989782s" podCreationTimestamp="2025-12-03 14:00:46 +0000 UTC" firstStartedPulling="2025-12-03 14:00:47.816809109 +0000 UTC m=+138.242449771" lastFinishedPulling="2025-12-03 14:01:05.655367579 +0000 UTC m=+156.081008251" observedRunningTime="2025-12-03 14:01:20.193490888 +0000 UTC m=+170.619131560" watchObservedRunningTime="2025-12-03 14:01:20.238989782 +0000 UTC m=+170.664630444" Dec 03 14:01:20.239296 master-0 kubenswrapper[16176]: I1203 14:01:20.239182 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-648d88c756-vswh8" podStartSLOduration=25.239177117 podStartE2EDuration="25.239177117s" podCreationTimestamp="2025-12-03 14:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:01:20.238153697 +0000 UTC m=+170.663794369" watchObservedRunningTime="2025-12-03 14:01:20.239177117 +0000 UTC m=+170.664817789" Dec 03 14:01:20.313174 master-0 kubenswrapper[16176]: I1203 14:01:20.313062 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=11.313033186 podStartE2EDuration="11.313033186s" podCreationTimestamp="2025-12-03 14:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-03 14:01:20.305208108 +0000 UTC m=+170.730848780" watchObservedRunningTime="2025-12-03 14:01:20.313033186 +0000 UTC m=+170.738673848" Dec 03 14:01:20.501795 master-0 kubenswrapper[16176]: I1203 14:01:20.500820 16176 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:01:20.501795 master-0 kubenswrapper[16176]: I1203 14:01:20.500912 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:01:20.504361 master-0 kubenswrapper[16176]: I1203 14:01:20.504323 16176 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:01:20.504464 master-0 kubenswrapper[16176]: I1203 14:01:20.504378 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:01:20.804289 master-0 kubenswrapper[16176]: I1203 14:01:20.804233 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_7289628d-3a90-4603-9a3f-51ea3ba1ff57/installer/0.log" Dec 03 14:01:20.804503 master-0 kubenswrapper[16176]: I1203 14:01:20.804366 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Dec 03 14:01:20.981929 master-0 kubenswrapper[16176]: I1203 14:01:20.981877 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access\") pod \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " Dec 03 14:01:20.982140 master-0 kubenswrapper[16176]: I1203 14:01:20.981987 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir\") pod \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " Dec 03 14:01:20.982140 master-0 kubenswrapper[16176]: I1203 14:01:20.982020 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock\") pod \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\" (UID: \"7289628d-3a90-4603-9a3f-51ea3ba1ff57\") " Dec 03 14:01:20.982942 master-0 kubenswrapper[16176]: I1203 14:01:20.982900 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7289628d-3a90-4603-9a3f-51ea3ba1ff57" (UID: "7289628d-3a90-4603-9a3f-51ea3ba1ff57"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:20.983003 master-0 kubenswrapper[16176]: I1203 14:01:20.982946 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock" (OuterVolumeSpecName: "var-lock") pod "7289628d-3a90-4603-9a3f-51ea3ba1ff57" (UID: "7289628d-3a90-4603-9a3f-51ea3ba1ff57"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:20.986308 master-0 kubenswrapper[16176]: I1203 14:01:20.986154 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7289628d-3a90-4603-9a3f-51ea3ba1ff57" (UID: "7289628d-3a90-4603-9a3f-51ea3ba1ff57"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:21.084159 master-0 kubenswrapper[16176]: I1203 14:01:21.084104 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:21.084159 master-0 kubenswrapper[16176]: I1203 14:01:21.084150 16176 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:21.084159 master-0 kubenswrapper[16176]: I1203 14:01:21.084165 16176 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7289628d-3a90-4603-9a3f-51ea3ba1ff57-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:21.181485 master-0 kubenswrapper[16176]: I1203 14:01:21.181411 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23"} Dec 03 14:01:21.185133 master-0 kubenswrapper[16176]: I1203 14:01:21.185091 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_7289628d-3a90-4603-9a3f-51ea3ba1ff57/installer/0.log" Dec 03 14:01:21.185432 master-0 kubenswrapper[16176]: I1203 
14:01:21.185395 16176 generic.go:334] "Generic (PLEG): container finished" podID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" containerID="712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f" exitCode=2 Dec 03 14:01:21.186209 master-0 kubenswrapper[16176]: I1203 14:01:21.186143 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Dec 03 14:01:21.186377 master-0 kubenswrapper[16176]: I1203 14:01:21.186325 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"7289628d-3a90-4603-9a3f-51ea3ba1ff57","Type":"ContainerDied","Data":"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f"} Dec 03 14:01:21.186449 master-0 kubenswrapper[16176]: I1203 14:01:21.186388 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"7289628d-3a90-4603-9a3f-51ea3ba1ff57","Type":"ContainerDied","Data":"d0cc8d8cf72e11e70a69d75077d321007bb343e50bb7ba961fedf88d056b53ee"} Dec 03 14:01:21.186525 master-0 kubenswrapper[16176]: I1203 14:01:21.186499 16176 scope.go:117] "RemoveContainer" containerID="712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f" Dec 03 14:01:21.240804 master-0 kubenswrapper[16176]: I1203 14:01:21.240719 16176 scope.go:117] "RemoveContainer" containerID="712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f" Dec 03 14:01:21.244908 master-0 kubenswrapper[16176]: E1203 14:01:21.244845 16176 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f\": container with ID starting with 712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f not found: ID does not exist" containerID="712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f" Dec 03 14:01:21.244997 master-0 
kubenswrapper[16176]: I1203 14:01:21.244961 16176 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f"} err="failed to get container status \"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f\": rpc error: code = NotFound desc = could not find container \"712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f\": container with ID starting with 712004e4a616f64cba0bc5374de15752f6f6cf8b8d96b32268dee2bbf4b2f51f not found: ID does not exist" Dec 03 14:01:22.888863 master-0 kubenswrapper[16176]: I1203 14:01:22.888792 16176 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:01:22.889396 master-0 kubenswrapper[16176]: I1203 14:01:22.888885 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:01:25.219177 master-0 kubenswrapper[16176]: I1203 14:01:25.219089 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072"} Dec 03 14:01:25.222402 master-0 kubenswrapper[16176]: I1203 14:01:25.222353 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" 
event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"d68d496052bf3db1426b92297d8fe9b84c2aa7e56373eba2857fc2b0fe99bf8e"} Dec 03 14:01:25.403597 master-0 kubenswrapper[16176]: I1203 14:01:25.403497 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:01:25.403597 master-0 kubenswrapper[16176]: I1203 14:01:25.403615 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:01:25.404794 master-0 kubenswrapper[16176]: I1203 14:01:25.404716 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:01:25.404916 master-0 kubenswrapper[16176]: I1203 14:01:25.404814 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:01:26.243903 master-0 kubenswrapper[16176]: I1203 14:01:26.243786 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Dec 03 14:01:26.244503 master-0 kubenswrapper[16176]: E1203 14:01:26.244202 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" containerName="installer" Dec 03 14:01:26.244503 master-0 kubenswrapper[16176]: I1203 14:01:26.244234 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" containerName="installer" Dec 03 14:01:26.244503 master-0 kubenswrapper[16176]: I1203 14:01:26.244404 16176 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" containerName="installer" Dec 03 14:01:26.245140 master-0 kubenswrapper[16176]: I1203 14:01:26.245117 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.247403 master-0 kubenswrapper[16176]: I1203 14:01:26.247311 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 14:01:26.247782 master-0 kubenswrapper[16176]: I1203 14:01:26.247310 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz" Dec 03 14:01:26.271586 master-0 kubenswrapper[16176]: I1203 14:01:26.271525 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.271770 master-0 kubenswrapper[16176]: I1203 14:01:26.271597 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.271770 master-0 kubenswrapper[16176]: I1203 14:01:26.271651 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.373874 master-0 kubenswrapper[16176]: I1203 14:01:26.373769 
16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.373874 master-0 kubenswrapper[16176]: I1203 14:01:26.373865 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.374425 master-0 kubenswrapper[16176]: I1203 14:01:26.373922 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.374425 master-0 kubenswrapper[16176]: I1203 14:01:26.374061 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.374425 master-0 kubenswrapper[16176]: I1203 14:01:26.374064 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:26.428202 master-0 kubenswrapper[16176]: I1203 14:01:26.428115 16176 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:01:26.428496 master-0 kubenswrapper[16176]: I1203 14:01:26.428363 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:01:26.429962 master-0 kubenswrapper[16176]: I1203 14:01:26.429921 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:01:26.430039 master-0 kubenswrapper[16176]: I1203 14:01:26.429968 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:01:28.513785 master-0 kubenswrapper[16176]: I1203 14:01:28.513701 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:01:28.513785 master-0 kubenswrapper[16176]: I1203 14:01:28.513785 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:01:29.258933 master-0 kubenswrapper[16176]: I1203 14:01:29.258752 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"ed89108d17473516cc693d23691659c9c34a6af1456a6ccf3615665c33a745cb"} Dec 03 14:01:30.514438 master-0 kubenswrapper[16176]: I1203 14:01:30.514307 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:01:30.807330 master-0 
kubenswrapper[16176]: I1203 14:01:30.807196 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" podUID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" containerName="oauth-openshift" containerID="cri-o://dc65a41ab47ecc33d6e15fa70b631a281a5b5603c8bd7cc62f9b82f52611d9a1" gracePeriod=15 Dec 03 14:01:32.561031 master-0 kubenswrapper[16176]: I1203 14:01:32.560915 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Dec 03 14:01:33.010431 master-0 kubenswrapper[16176]: I1203 14:01:33.008306 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Dec 03 14:01:33.015455 master-0 kubenswrapper[16176]: I1203 14:01:33.015304 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 14:01:33.032695 master-0 kubenswrapper[16176]: I1203 14:01:33.031019 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:33.077556 master-0 kubenswrapper[16176]: I1203 14:01:33.077501 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Dec 03 14:01:33.137921 master-0 kubenswrapper[16176]: I1203 14:01:33.136165 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4p4zh"] Dec 03 14:01:33.138476 master-0 kubenswrapper[16176]: I1203 14:01:33.138441 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.147139 master-0 kubenswrapper[16176]: I1203 14:01:33.146080 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh" Dec 03 14:01:33.147139 master-0 kubenswrapper[16176]: I1203 14:01:33.146340 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 14:01:33.172453 master-0 kubenswrapper[16176]: I1203 14:01:33.172394 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz" Dec 03 14:01:33.178740 master-0 kubenswrapper[16176]: I1203 14:01:33.177785 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:01:33.209015 master-0 kubenswrapper[16176]: I1203 14:01:33.208674 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.209015 master-0 kubenswrapper[16176]: I1203 14:01:33.208737 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.209314 master-0 kubenswrapper[16176]: I1203 14:01:33.209108 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: 
\"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.266798 master-0 kubenswrapper[16176]: I1203 14:01:33.265471 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=13.604905092 podStartE2EDuration="50.265431381s" podCreationTimestamp="2025-12-03 14:00:43 +0000 UTC" firstStartedPulling="2025-12-03 14:00:44.521384155 +0000 UTC m=+134.947024817" lastFinishedPulling="2025-12-03 14:01:21.181910444 +0000 UTC m=+171.607551106" observedRunningTime="2025-12-03 14:01:33.263744962 +0000 UTC m=+183.689385634" watchObservedRunningTime="2025-12-03 14:01:33.265431381 +0000 UTC m=+183.691072053" Dec 03 14:01:33.291181 master-0 kubenswrapper[16176]: I1203 14:01:33.291111 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"d4439eb52546785aae37b834477017e795b9034f4b166104f8bb03f8ec8b60b0"} Dec 03 14:01:33.292690 master-0 kubenswrapper[16176]: I1203 14:01:33.292662 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:01:33.306180 master-0 kubenswrapper[16176]: I1203 14:01:33.304638 16176 generic.go:334] "Generic (PLEG): container finished" podID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" containerID="dc65a41ab47ecc33d6e15fa70b631a281a5b5603c8bd7cc62f9b82f52611d9a1" exitCode=0 Dec 03 14:01:33.306180 master-0 kubenswrapper[16176]: I1203 14:01:33.304824 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" event={"ID":"bea7d8b9-2778-469b-9f91-fffbf7de5e68","Type":"ContainerDied","Data":"dc65a41ab47ecc33d6e15fa70b631a281a5b5603c8bd7cc62f9b82f52611d9a1"} Dec 03 14:01:33.310973 master-0 kubenswrapper[16176]: I1203 14:01:33.310902 16176 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.311091 master-0 kubenswrapper[16176]: I1203 14:01:33.310977 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.311091 master-0 kubenswrapper[16176]: I1203 14:01:33.311031 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.311603 master-0 kubenswrapper[16176]: I1203 14:01:33.311569 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.312087 master-0 kubenswrapper[16176]: I1203 14:01:33.312058 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.347940 master-0 kubenswrapper[16176]: I1203 14:01:33.347886 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 
14:01:33.352079 master-0 kubenswrapper[16176]: I1203 14:01:33.352031 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.365639 master-0 kubenswrapper[16176]: I1203 14:01:33.362660 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podStartSLOduration=14.842333296 podStartE2EDuration="49.362639899s" podCreationTimestamp="2025-12-03 14:00:44 +0000 UTC" firstStartedPulling="2025-12-03 14:00:46.608836676 +0000 UTC m=+137.034477338" lastFinishedPulling="2025-12-03 14:01:21.129143279 +0000 UTC m=+171.554783941" observedRunningTime="2025-12-03 14:01:33.360170338 +0000 UTC m=+183.785811010" watchObservedRunningTime="2025-12-03 14:01:33.362639899 +0000 UTC m=+183.788280561" Dec 03 14:01:33.509962 master-0 kubenswrapper[16176]: I1203 14:01:33.509904 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:01:33.547090 master-0 kubenswrapper[16176]: W1203 14:01:33.545901 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7d6a05e_beee_40e9_b376_5c22e285b27a.slice/crio-e68fc905327296d04331757a2dcbf12bf8f35da0108413935698526463dbc474 WatchSource:0}: Error finding container e68fc905327296d04331757a2dcbf12bf8f35da0108413935698526463dbc474: Status 404 returned error can't find the container with id e68fc905327296d04331757a2dcbf12bf8f35da0108413935698526463dbc474 Dec 03 14:01:33.569586 master-0 kubenswrapper[16176]: I1203 14:01:33.569527 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.720812 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.720914 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.720977 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.721003 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqf6d\" (UniqueName: \"kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.721040 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: 
\"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.721080 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.721164 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721292 master-0 kubenswrapper[16176]: I1203 14:01:33.721222 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721989 master-0 kubenswrapper[16176]: I1203 14:01:33.721344 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721989 master-0 kubenswrapper[16176]: I1203 14:01:33.721409 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: 
\"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721989 master-0 kubenswrapper[16176]: I1203 14:01:33.721482 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721989 master-0 kubenswrapper[16176]: I1203 14:01:33.721540 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.721989 master-0 kubenswrapper[16176]: I1203 14:01:33.721591 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template\") pod \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\" (UID: \"bea7d8b9-2778-469b-9f91-fffbf7de5e68\") " Dec 03 14:01:33.726495 master-0 kubenswrapper[16176]: I1203 14:01:33.726407 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.726684 master-0 kubenswrapper[16176]: I1203 14:01:33.726586 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.727520 master-0 kubenswrapper[16176]: I1203 14:01:33.726889 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:01:33.727520 master-0 kubenswrapper[16176]: I1203 14:01:33.726969 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.727520 master-0 kubenswrapper[16176]: I1203 14:01:33.727067 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:01:33.727520 master-0 kubenswrapper[16176]: I1203 14:01:33.727296 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.733149 master-0 kubenswrapper[16176]: I1203 14:01:33.731736 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.733149 master-0 kubenswrapper[16176]: I1203 14:01:33.732400 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:01:33.733149 master-0 kubenswrapper[16176]: I1203 14:01:33.733058 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:01:33.734093 master-0 kubenswrapper[16176]: I1203 14:01:33.733587 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d" (OuterVolumeSpecName: "kube-api-access-xqf6d") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "kube-api-access-xqf6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:01:33.734175 master-0 kubenswrapper[16176]: I1203 14:01:33.734072 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:01:33.736995 master-0 kubenswrapper[16176]: I1203 14:01:33.736930 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.738224 master-0 kubenswrapper[16176]: I1203 14:01:33.737253 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "bea7d8b9-2778-469b-9f91-fffbf7de5e68" (UID: "bea7d8b9-2778-469b-9f91-fffbf7de5e68"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:01:33.772473 master-0 kubenswrapper[16176]: I1203 14:01:33.767414 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-747bdb58b5-mn76f"] Dec 03 14:01:33.772473 master-0 kubenswrapper[16176]: E1203 14:01:33.767883 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" containerName="oauth-openshift" Dec 03 14:01:33.772473 master-0 kubenswrapper[16176]: I1203 14:01:33.767900 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" containerName="oauth-openshift" Dec 03 14:01:33.772473 master-0 kubenswrapper[16176]: I1203 14:01:33.768043 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" containerName="oauth-openshift" Dec 03 14:01:33.772473 master-0 kubenswrapper[16176]: I1203 14:01:33.768665 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.817141 master-0 kubenswrapper[16176]: I1203 14:01:33.817080 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7289628d-3a90-4603-9a3f-51ea3ba1ff57" path="/var/lib/kubelet/pods/7289628d-3a90-4603-9a3f-51ea3ba1ff57/volumes" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823021 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823085 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823122 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823152 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823172 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823224 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823250 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823294 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823571 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823717 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.823788 master-0 kubenswrapper[16176]: I1203 14:01:33.823768 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.823802 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod 
\"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.823825 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824109 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824127 16176 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-policies\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824139 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824151 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqf6d\" (UniqueName: \"kubernetes.io/projected/bea7d8b9-2778-469b-9f91-fffbf7de5e68-kube-api-access-xqf6d\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824161 16176 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824176 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824189 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824201 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824212 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824225 16176 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bea7d8b9-2778-469b-9f91-fffbf7de5e68-audit-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824242 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824280 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.824287 master-0 kubenswrapper[16176]: I1203 14:01:33.824299 16176 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bea7d8b9-2778-469b-9f91-fffbf7de5e68-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Dec 03 14:01:33.855929 master-0 kubenswrapper[16176]: I1203 14:01:33.855009 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Dec 03 14:01:33.865994 master-0 kubenswrapper[16176]: W1203 14:01:33.864928 16176 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0b1e0884_ff54_419b_90d3_25f561a6391d.slice/crio-528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b WatchSource:0}: Error finding container 528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b: Status 404 returned error can't find the container with id 528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b Dec 03 14:01:33.927162 master-0 kubenswrapper[16176]: I1203 14:01:33.927060 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927162 master-0 
kubenswrapper[16176]: I1203 14:01:33.927153 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927384 master-0 kubenswrapper[16176]: I1203 14:01:33.927186 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927384 master-0 kubenswrapper[16176]: I1203 14:01:33.927223 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927384 master-0 kubenswrapper[16176]: I1203 14:01:33.927329 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927384 master-0 kubenswrapper[16176]: I1203 14:01:33.927363 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927556 master-0 kubenswrapper[16176]: I1203 14:01:33.927398 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927556 master-0 kubenswrapper[16176]: I1203 14:01:33.927435 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927556 master-0 kubenswrapper[16176]: I1203 14:01:33.927459 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927556 master-0 kubenswrapper[16176]: I1203 14:01:33.927522 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927556 master-0 kubenswrapper[16176]: I1203 14:01:33.927551 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927761 master-0 kubenswrapper[16176]: I1203 14:01:33.927579 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.927761 master-0 kubenswrapper[16176]: I1203 14:01:33.927612 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.928836 master-0 kubenswrapper[16176]: I1203 14:01:33.928606 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.932544 master-0 kubenswrapper[16176]: I1203 14:01:33.930054 16176 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.932544 master-0 kubenswrapper[16176]: I1203 14:01:33.930076 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.932544 master-0 kubenswrapper[16176]: I1203 14:01:33.930128 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.932544 master-0 kubenswrapper[16176]: I1203 14:01:33.931891 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.932885 master-0 kubenswrapper[16176]: I1203 14:01:33.932842 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.933251 master-0 kubenswrapper[16176]: I1203 14:01:33.933206 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.933542 master-0 kubenswrapper[16176]: I1203 14:01:33.933490 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.935574 master-0 kubenswrapper[16176]: I1203 14:01:33.935539 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.936970 master-0 kubenswrapper[16176]: I1203 14:01:33.936932 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.937122 master-0 
kubenswrapper[16176]: I1203 14:01:33.937068 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:33.938255 master-0 kubenswrapper[16176]: I1203 14:01:33.938219 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:34.081912 master-0 kubenswrapper[16176]: I1203 14:01:34.080944 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-747bdb58b5-mn76f"] Dec 03 14:01:34.095657 master-0 kubenswrapper[16176]: I1203 14:01:34.095518 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:34.100967 master-0 kubenswrapper[16176]: I1203 14:01:34.100910 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:01:34.318790 master-0 kubenswrapper[16176]: I1203 14:01:34.318727 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2"} Dec 03 14:01:34.318790 master-0 kubenswrapper[16176]: I1203 14:01:34.318790 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd"} Dec 03 14:01:34.319048 master-0 kubenswrapper[16176]: I1203 14:01:34.318806 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe"} Dec 03 14:01:34.319898 master-0 kubenswrapper[16176]: I1203 14:01:34.319864 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0b1e0884-ff54-419b-90d3-25f561a6391d","Type":"ContainerStarted","Data":"528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b"} Dec 03 14:01:34.322035 master-0 kubenswrapper[16176]: I1203 14:01:34.321952 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"e68fc905327296d04331757a2dcbf12bf8f35da0108413935698526463dbc474"} Dec 03 14:01:34.326625 master-0 kubenswrapper[16176]: I1203 14:01:34.326543 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" 
event={"ID":"bea7d8b9-2778-469b-9f91-fffbf7de5e68","Type":"ContainerDied","Data":"603f5c42cfe32f9047c9208f62023807c8910157b24b27bfb3041b1af52546c8"} Dec 03 14:01:34.326625 master-0 kubenswrapper[16176]: I1203 14:01:34.326573 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79f7f4d988-pxd4d" Dec 03 14:01:34.326625 master-0 kubenswrapper[16176]: I1203 14:01:34.326606 16176 scope.go:117] "RemoveContainer" containerID="dc65a41ab47ecc33d6e15fa70b631a281a5b5603c8bd7cc62f9b82f52611d9a1" Dec 03 14:01:35.345104 master-0 kubenswrapper[16176]: I1203 14:01:35.345030 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c"} Dec 03 14:01:35.403192 master-0 kubenswrapper[16176]: I1203 14:01:35.403125 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:01:35.403505 master-0 kubenswrapper[16176]: I1203 14:01:35.403196 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:01:36.428930 master-0 kubenswrapper[16176]: I1203 14:01:36.428865 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:01:36.429500 master-0 
kubenswrapper[16176]: I1203 14:01:36.428941 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:01:38.373204 master-0 kubenswrapper[16176]: I1203 14:01:38.373137 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0b1e0884-ff54-419b-90d3-25f561a6391d","Type":"ContainerStarted","Data":"5944601d984a89efbdcd280a11d9fec3279923f00b3d3e74a67095fff7358739"} Dec 03 14:01:39.217656 master-0 kubenswrapper[16176]: I1203 14:01:39.217578 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"] Dec 03 14:01:44.423575 master-0 kubenswrapper[16176]: I1203 14:01:44.423482 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856"} Dec 03 14:01:45.403999 master-0 kubenswrapper[16176]: I1203 14:01:45.403927 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:01:45.403999 master-0 kubenswrapper[16176]: I1203 14:01:45.404000 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:01:46.430754 master-0 kubenswrapper[16176]: I1203 14:01:46.430677 
16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:01:46.431373 master-0 kubenswrapper[16176]: I1203 14:01:46.430799 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:01:46.875855 master-0 kubenswrapper[16176]: E1203 14:01:46.875726 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:01:48.254551 master-0 kubenswrapper[16176]: I1203 14:01:48.245188 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-79f7f4d988-pxd4d"] Dec 03 14:01:48.522886 master-0 kubenswrapper[16176]: I1203 14:01:48.522770 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:01:48.526568 master-0 kubenswrapper[16176]: I1203 14:01:48.526535 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:01:49.801583 master-0 kubenswrapper[16176]: I1203 14:01:49.801490 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bea7d8b9-2778-469b-9f91-fffbf7de5e68" path="/var/lib/kubelet/pods/bea7d8b9-2778-469b-9f91-fffbf7de5e68/volumes" Dec 03 14:01:52.888944 master-0 kubenswrapper[16176]: I1203 14:01:52.888820 16176 patch_prober.go:28] interesting 
pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:01:52.888944 master-0 kubenswrapper[16176]: I1203 14:01:52.888912 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:01:52.890315 master-0 kubenswrapper[16176]: I1203 14:01:52.888987 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:01:52.890315 master-0 kubenswrapper[16176]: I1203 14:01:52.889960 16176 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f040ef6a7a2c71d1bb88a8e6c44278311cf99bd34b4ad6f1e2a093046f77970f"} pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 14:01:52.890315 master-0 kubenswrapper[16176]: I1203 14:01:52.890051 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" containerID="cri-o://f040ef6a7a2c71d1bb88a8e6c44278311cf99bd34b4ad6f1e2a093046f77970f" gracePeriod=600 Dec 03 14:01:54.345108 master-0 kubenswrapper[16176]: I1203 14:01:54.344998 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Dec 03 14:01:54.922788 master-0 kubenswrapper[16176]: I1203 14:01:54.922667 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Dec 03 14:01:55.404473 master-0 kubenswrapper[16176]: I1203 14:01:55.404374 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:01:55.404473 master-0 kubenswrapper[16176]: I1203 14:01:55.404465 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:01:56.428534 master-0 kubenswrapper[16176]: I1203 14:01:56.428478 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:01:56.429138 master-0 kubenswrapper[16176]: I1203 14:01:56.428560 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: 
connection refused" Dec 03 14:01:56.520575 master-0 kubenswrapper[16176]: I1203 14:01:56.520501 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2"} Dec 03 14:01:56.876782 master-0 kubenswrapper[16176]: E1203 14:01:56.876695 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:02:02.387616 master-0 kubenswrapper[16176]: I1203 14:02:02.386073 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-747bdb58b5-mn76f"] Dec 03 14:02:02.411310 master-0 kubenswrapper[16176]: E1203 14:02:02.404094 16176 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"master-0\": the object has been modified; please apply your changes to the latest version and try again" Dec 03 14:02:02.576094 master-0 kubenswrapper[16176]: I1203 14:02:02.575981 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"5fa4316b2fd539a1689a7f8bb879371f16aef2391ed911b4881568b5a209ee3e"} Dec 03 14:02:02.578761 master-0 kubenswrapper[16176]: I1203 14:02:02.578687 16176 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" exitCode=1 Dec 03 14:02:02.578761 master-0 kubenswrapper[16176]: I1203 14:02:02.578756 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc"} Dec 03 14:02:02.579011 master-0 kubenswrapper[16176]: I1203 14:02:02.578798 16176 scope.go:117] "RemoveContainer" containerID="d9fcf7c508606bbaf8625771e275b5584558a2a2dd28d23c5aae8ec6c71abe1b" Dec 03 14:02:02.579512 master-0 kubenswrapper[16176]: I1203 14:02:02.579462 16176 scope.go:117] "RemoveContainer" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:02:02.579857 master-0 kubenswrapper[16176]: E1203 14:02:02.579806 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" Dec 03 14:02:02.581954 master-0 kubenswrapper[16176]: I1203 14:02:02.581898 16176 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="f040ef6a7a2c71d1bb88a8e6c44278311cf99bd34b4ad6f1e2a093046f77970f" exitCode=0 Dec 03 14:02:02.582058 master-0 kubenswrapper[16176]: I1203 14:02:02.581974 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"f040ef6a7a2c71d1bb88a8e6c44278311cf99bd34b4ad6f1e2a093046f77970f"} Dec 03 14:02:03.209824 master-0 kubenswrapper[16176]: I1203 14:02:03.209757 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:03.591097 master-0 kubenswrapper[16176]: I1203 14:02:03.591038 16176 scope.go:117] "RemoveContainer" 
containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:02:03.591665 master-0 kubenswrapper[16176]: E1203 14:02:03.591401 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" Dec 03 14:02:03.592139 master-0 kubenswrapper[16176]: I1203 14:02:03.592097 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"2fbde32eb18abfa7b6e72ffc4634a409c0aca270b847e310a795438cc6476311"} Dec 03 14:02:04.345029 master-0 kubenswrapper[16176]: I1203 14:02:04.344952 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:04.602966 master-0 kubenswrapper[16176]: I1203 14:02:04.602778 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"c15591114b17f80d4005fe7b9b93186b1001827c8ad32c7e12c1faa9d0831719"} Dec 03 14:02:04.603483 master-0 kubenswrapper[16176]: I1203 14:02:04.603359 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:02:04.603703 master-0 kubenswrapper[16176]: I1203 14:02:04.603643 16176 scope.go:117] "RemoveContainer" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:02:04.604129 master-0 kubenswrapper[16176]: E1203 14:02:04.604085 16176 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" Dec 03 14:02:04.664643 master-0 kubenswrapper[16176]: I1203 14:02:04.664570 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=43.66454863 podStartE2EDuration="43.66454863s" podCreationTimestamp="2025-12-03 14:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:02:02.410012919 +0000 UTC m=+212.835653591" watchObservedRunningTime="2025-12-03 14:02:04.66454863 +0000 UTC m=+215.090189292" Dec 03 14:02:04.921918 master-0 kubenswrapper[16176]: I1203 14:02:04.921634 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:05.404120 master-0 kubenswrapper[16176]: I1203 14:02:05.404013 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:02:05.404690 master-0 kubenswrapper[16176]: I1203 14:02:05.404141 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:02:05.603545 master-0 kubenswrapper[16176]: I1203 14:02:05.603396 16176 patch_prober.go:28] interesting 
pod/oauth-openshift-747bdb58b5-mn76f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.94:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:02:05.603545 master-0 kubenswrapper[16176]: I1203 14:02:05.603507 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.94:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:02:05.612788 master-0 kubenswrapper[16176]: I1203 14:02:05.612713 16176 scope.go:117] "RemoveContainer" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:02:05.613166 master-0 kubenswrapper[16176]: E1203 14:02:05.612934 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(7bce50c457ac1f4721bc81a570dd238a)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" Dec 03 14:02:05.930828 master-0 kubenswrapper[16176]: I1203 14:02:05.930744 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:02:06.022490 master-0 kubenswrapper[16176]: I1203 14:02:06.021974 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podStartSLOduration=61.021948901 podStartE2EDuration="1m1.021948901s" podCreationTimestamp="2025-12-03 14:01:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:02:06.019802649 +0000 UTC m=+216.445443331" watchObservedRunningTime="2025-12-03 14:02:06.021948901 +0000 UTC m=+216.447589573" Dec 03 14:02:06.261290 master-0 kubenswrapper[16176]: I1203 14:02:06.259010 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=62.182352934 podStartE2EDuration="1m16.258965376s" podCreationTimestamp="2025-12-03 14:00:50 +0000 UTC" firstStartedPulling="2025-12-03 14:01:19.153296966 +0000 UTC m=+169.578937658" lastFinishedPulling="2025-12-03 14:01:33.229909438 +0000 UTC m=+183.655550100" observedRunningTime="2025-12-03 14:02:06.214500642 +0000 UTC m=+216.640141324" watchObservedRunningTime="2025-12-03 14:02:06.258965376 +0000 UTC m=+216.684606058" Dec 03 14:02:06.280740 master-0 kubenswrapper[16176]: I1203 14:02:06.278284 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:02:06.350290 master-0 kubenswrapper[16176]: E1203 14:02:06.349301 16176 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found Dec 03 14:02:06.350290 master-0 kubenswrapper[16176]: E1203 14:02:06.349440 16176 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:02:06.849411107 +0000 UTC m=+217.275051859 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : configmap "prometheus-k8s-rulefiles-0" not found Dec 03 14:02:06.429079 master-0 kubenswrapper[16176]: I1203 14:02:06.428983 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:02:06.429353 master-0 kubenswrapper[16176]: I1203 14:02:06.429128 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:02:06.621775 master-0 kubenswrapper[16176]: I1203 14:02:06.621150 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"464d61112584f8d327c8a89a0106ed622430e5b4bbd85cdaef9caf5124f5db07"} Dec 03 14:02:06.674109 master-0 kubenswrapper[16176]: I1203 14:02:06.674007 16176 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4p4zh" podStartSLOduration=1.195806118 podStartE2EDuration="33.67397748s" podCreationTimestamp="2025-12-03 14:01:33 +0000 UTC" firstStartedPulling="2025-12-03 14:01:33.549803015 +0000 UTC m=+183.975443677" lastFinishedPulling="2025-12-03 14:02:06.027974377 +0000 UTC m=+216.453615039" observedRunningTime="2025-12-03 14:02:06.669724776 +0000 UTC m=+217.095365458" watchObservedRunningTime="2025-12-03 14:02:06.67397748 +0000 UTC m=+217.099618142" Dec 03 14:02:15.404691 
master-0 kubenswrapper[16176]: I1203 14:02:15.404596 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:02:15.404691 master-0 kubenswrapper[16176]: I1203 14:02:15.404673 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:02:16.428209 master-0 kubenswrapper[16176]: I1203 14:02:16.428157 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:02:16.428800 master-0 kubenswrapper[16176]: I1203 14:02:16.428231 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:02:18.797725 master-0 kubenswrapper[16176]: I1203 14:02:18.797640 16176 scope.go:117] "RemoveContainer" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:02:19.726010 master-0 kubenswrapper[16176]: I1203 14:02:19.725951 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22"} Dec 03 14:02:23.246345 master-0 kubenswrapper[16176]: 
I1203 14:02:23.245229 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:23.250695 master-0 kubenswrapper[16176]: I1203 14:02:23.250648 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:23.754671 master-0 kubenswrapper[16176]: I1203 14:02:23.754603 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:25.403802 master-0 kubenswrapper[16176]: I1203 14:02:25.403731 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:02:25.404372 master-0 kubenswrapper[16176]: I1203 14:02:25.403817 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:02:26.428686 master-0 kubenswrapper[16176]: I1203 14:02:26.428566 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:02:26.428686 master-0 kubenswrapper[16176]: I1203 14:02:26.428647 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: 
connection refused" Dec 03 14:02:33.157377 master-0 kubenswrapper[16176]: I1203 14:02:33.120405 16176 scope.go:117] "RemoveContainer" containerID="bff924e57f8b918e3a3ad84e8e605175cf5d1f94b5b29dc34e7f35b1adc45881" Dec 03 14:02:34.353628 master-0 kubenswrapper[16176]: I1203 14:02:34.353543 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:02:34.549984 master-0 kubenswrapper[16176]: I1203 14:02:34.549699 16176 scope.go:117] "RemoveContainer" containerID="4725755e8fcd48f231efa829d0b8caaa4b86286927a6c9554929c23c3560adbc" Dec 03 14:02:35.403581 master-0 kubenswrapper[16176]: I1203 14:02:35.403325 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:02:35.403581 master-0 kubenswrapper[16176]: I1203 14:02:35.403442 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:02:36.411699 master-0 kubenswrapper[16176]: I1203 14:02:36.411471 16176 scope.go:117] "RemoveContainer" containerID="12320acd67b84e2398e0ea7d64e0808c389cfb6c37276f22848b739eb71e3539" Dec 03 14:02:36.437626 master-0 kubenswrapper[16176]: I1203 14:02:36.437561 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:02:36.437848 master-0 kubenswrapper[16176]: I1203 14:02:36.437647 16176 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:02:45.403818 master-0 kubenswrapper[16176]: I1203 14:02:45.403648 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:02:45.403818 master-0 kubenswrapper[16176]: I1203 14:02:45.403728 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:02:45.925056 master-0 kubenswrapper[16176]: I1203 14:02:45.924947 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"] Dec 03 14:02:45.925640 master-0 kubenswrapper[16176]: I1203 14:02:45.925505 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" podUID="f1221807-c515-4995-a085-8fc98f62932f" containerName="route-controller-manager" containerID="cri-o://7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f" gracePeriod=30 Dec 03 14:02:45.935250 master-0 kubenswrapper[16176]: I1203 14:02:45.935180 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"] Dec 03 14:02:45.953579 master-0 kubenswrapper[16176]: I1203 14:02:45.953394 16176 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" containerName="controller-manager" containerID="cri-o://813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df" gracePeriod=30 Dec 03 14:02:46.428532 master-0 kubenswrapper[16176]: I1203 14:02:46.428454 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:02:46.429438 master-0 kubenswrapper[16176]: I1203 14:02:46.428572 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:02:49.162158 master-0 kubenswrapper[16176]: I1203 14:02:49.162083 16176 generic.go:334] "Generic (PLEG): container finished" podID="f1221807-c515-4995-a085-8fc98f62932f" containerID="7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f" exitCode=0 Dec 03 14:02:49.166941 master-0 kubenswrapper[16176]: I1203 14:02:49.162817 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" event={"ID":"f1221807-c515-4995-a085-8fc98f62932f","Type":"ContainerDied","Data":"7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f"} Dec 03 14:02:49.188838 master-0 kubenswrapper[16176]: I1203 14:02:49.188744 16176 generic.go:334] "Generic (PLEG): container finished" podID="12822200-5857-4e2a-96bf-31c2d917ae9e" containerID="813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df" exitCode=0 Dec 03 14:02:49.188838 master-0 kubenswrapper[16176]: I1203 14:02:49.188823 16176 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" event={"ID":"12822200-5857-4e2a-96bf-31c2d917ae9e","Type":"ContainerDied","Data":"813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df"} Dec 03 14:02:50.214414 master-0 kubenswrapper[16176]: I1203 14:02:50.214081 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" event={"ID":"12822200-5857-4e2a-96bf-31c2d917ae9e","Type":"ContainerDied","Data":"9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8"} Dec 03 14:02:50.214414 master-0 kubenswrapper[16176]: I1203 14:02:50.214170 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d79e981c03775691177de08120983d8a20be21bc69e96726728743fda1f99e8" Dec 03 14:02:50.217614 master-0 kubenswrapper[16176]: I1203 14:02:50.217545 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" event={"ID":"f1221807-c515-4995-a085-8fc98f62932f","Type":"ContainerDied","Data":"8811e14090f8b0635c52b9294640b9ceb5ebd2e9753c457be1775020b2aa73ae"} Dec 03 14:02:50.217614 master-0 kubenswrapper[16176]: I1203 14:02:50.217616 16176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8811e14090f8b0635c52b9294640b9ceb5ebd2e9753c457be1775020b2aa73ae" Dec 03 14:02:51.227503 master-0 kubenswrapper[16176]: I1203 14:02:51.227447 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:02:51.235524 master-0 kubenswrapper[16176]: I1203 14:02:51.235445 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" Dec 03 14:02:51.277850 master-0 kubenswrapper[16176]: I1203 14:02:51.277754 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:02:51.327418 master-0 kubenswrapper[16176]: I1203 14:02:51.327235 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config\") pod \"12822200-5857-4e2a-96bf-31c2d917ae9e\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " Dec 03 14:02:51.327667 master-0 kubenswrapper[16176]: I1203 14:02:51.327584 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert\") pod \"f1221807-c515-4995-a085-8fc98f62932f\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " Dec 03 14:02:51.327667 master-0 kubenswrapper[16176]: I1203 14:02:51.327657 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cck9v\" (UniqueName: \"kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v\") pod \"f1221807-c515-4995-a085-8fc98f62932f\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " Dec 03 14:02:51.327780 master-0 kubenswrapper[16176]: I1203 14:02:51.327762 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert\") pod \"12822200-5857-4e2a-96bf-31c2d917ae9e\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " Dec 03 14:02:51.327832 master-0 kubenswrapper[16176]: I1203 14:02:51.327806 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca\") pod \"12822200-5857-4e2a-96bf-31c2d917ae9e\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " Dec 03 14:02:51.327888 master-0 kubenswrapper[16176]: I1203 14:02:51.327840 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config\") pod \"f1221807-c515-4995-a085-8fc98f62932f\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " Dec 03 14:02:51.327888 master-0 kubenswrapper[16176]: I1203 14:02:51.327866 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles\") pod \"12822200-5857-4e2a-96bf-31c2d917ae9e\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " Dec 03 14:02:51.327982 master-0 kubenswrapper[16176]: I1203 14:02:51.327927 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvrwh\" (UniqueName: \"kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh\") pod \"12822200-5857-4e2a-96bf-31c2d917ae9e\" (UID: \"12822200-5857-4e2a-96bf-31c2d917ae9e\") " Dec 03 14:02:51.328032 master-0 kubenswrapper[16176]: I1203 14:02:51.328011 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca\") pod \"f1221807-c515-4995-a085-8fc98f62932f\" (UID: \"f1221807-c515-4995-a085-8fc98f62932f\") " Dec 03 14:02:51.328394 master-0 kubenswrapper[16176]: I1203 14:02:51.328051 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config" (OuterVolumeSpecName: "config") pod "12822200-5857-4e2a-96bf-31c2d917ae9e" (UID: "12822200-5857-4e2a-96bf-31c2d917ae9e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:02:51.328493 master-0 kubenswrapper[16176]: I1203 14:02:51.328447 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "12822200-5857-4e2a-96bf-31c2d917ae9e" (UID: "12822200-5857-4e2a-96bf-31c2d917ae9e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:02:51.328493 master-0 kubenswrapper[16176]: I1203 14:02:51.328461 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca" (OuterVolumeSpecName: "client-ca") pod "12822200-5857-4e2a-96bf-31c2d917ae9e" (UID: "12822200-5857-4e2a-96bf-31c2d917ae9e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:02:51.328493 master-0 kubenswrapper[16176]: I1203 14:02:51.328459 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config" (OuterVolumeSpecName: "config") pod "f1221807-c515-4995-a085-8fc98f62932f" (UID: "f1221807-c515-4995-a085-8fc98f62932f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:02:51.329190 master-0 kubenswrapper[16176]: I1203 14:02:51.329122 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca" (OuterVolumeSpecName: "client-ca") pod "f1221807-c515-4995-a085-8fc98f62932f" (UID: "f1221807-c515-4995-a085-8fc98f62932f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:02:51.329823 master-0 kubenswrapper[16176]: I1203 14:02:51.329734 16176 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.329934 master-0 kubenswrapper[16176]: I1203 14:02:51.329810 16176 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.329934 master-0 kubenswrapper[16176]: I1203 14:02:51.329873 16176 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12822200-5857-4e2a-96bf-31c2d917ae9e-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.329934 master-0 kubenswrapper[16176]: I1203 14:02:51.329930 16176 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.330140 master-0 kubenswrapper[16176]: I1203 14:02:51.329954 16176 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f1221807-c515-4995-a085-8fc98f62932f-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.331689 master-0 kubenswrapper[16176]: I1203 14:02:51.331642 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v" (OuterVolumeSpecName: "kube-api-access-cck9v") pod "f1221807-c515-4995-a085-8fc98f62932f" (UID: "f1221807-c515-4995-a085-8fc98f62932f"). InnerVolumeSpecName "kube-api-access-cck9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:02:51.332003 master-0 kubenswrapper[16176]: I1203 14:02:51.331962 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "12822200-5857-4e2a-96bf-31c2d917ae9e" (UID: "12822200-5857-4e2a-96bf-31c2d917ae9e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:02:51.332112 master-0 kubenswrapper[16176]: I1203 14:02:51.332059 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh" (OuterVolumeSpecName: "kube-api-access-tvrwh") pod "12822200-5857-4e2a-96bf-31c2d917ae9e" (UID: "12822200-5857-4e2a-96bf-31c2d917ae9e"). InnerVolumeSpecName "kube-api-access-tvrwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:02:51.333731 master-0 kubenswrapper[16176]: I1203 14:02:51.333688 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:02:51.334003 master-0 kubenswrapper[16176]: I1203 14:02:51.333974 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f1221807-c515-4995-a085-8fc98f62932f" (UID: "f1221807-c515-4995-a085-8fc98f62932f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:02:51.431528 master-0 kubenswrapper[16176]: I1203 14:02:51.431496 16176 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1221807-c515-4995-a085-8fc98f62932f-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.431651 master-0 kubenswrapper[16176]: I1203 14:02:51.431533 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cck9v\" (UniqueName: \"kubernetes.io/projected/f1221807-c515-4995-a085-8fc98f62932f-kube-api-access-cck9v\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.431651 master-0 kubenswrapper[16176]: I1203 14:02:51.431544 16176 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12822200-5857-4e2a-96bf-31c2d917ae9e-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.431651 master-0 kubenswrapper[16176]: I1203 14:02:51.431553 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvrwh\" (UniqueName: \"kubernetes.io/projected/12822200-5857-4e2a-96bf-31c2d917ae9e-kube-api-access-tvrwh\") on node \"master-0\" DevicePath \"\"" Dec 03 14:02:51.549300 master-0 kubenswrapper[16176]: I1203 14:02:51.549204 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"] Dec 03 14:02:51.549778 master-0 kubenswrapper[16176]: E1203 14:02:51.549717 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" containerName="controller-manager" Dec 03 14:02:51.549778 master-0 kubenswrapper[16176]: I1203 14:02:51.549783 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" containerName="controller-manager" Dec 03 14:02:51.549911 master-0 kubenswrapper[16176]: E1203 14:02:51.549847 16176 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1221807-c515-4995-a085-8fc98f62932f" containerName="route-controller-manager" Dec 03 14:02:51.549911 master-0 kubenswrapper[16176]: I1203 14:02:51.549860 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1221807-c515-4995-a085-8fc98f62932f" containerName="route-controller-manager" Dec 03 14:02:51.553282 master-0 kubenswrapper[16176]: I1203 14:02:51.550082 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1221807-c515-4995-a085-8fc98f62932f" containerName="route-controller-manager" Dec 03 14:02:51.553282 master-0 kubenswrapper[16176]: I1203 14:02:51.550147 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" containerName="controller-manager" Dec 03 14:02:51.553282 master-0 kubenswrapper[16176]: I1203 14:02:51.550832 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.560539 master-0 kubenswrapper[16176]: I1203 14:02:51.560463 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"] Dec 03 14:02:51.643353 master-0 kubenswrapper[16176]: I1203 14:02:51.639896 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.643353 master-0 kubenswrapper[16176]: I1203 14:02:51.639951 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: 
\"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.643353 master-0 kubenswrapper[16176]: I1203 14:02:51.639995 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.643353 master-0 kubenswrapper[16176]: I1203 14:02:51.640076 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.643353 master-0 kubenswrapper[16176]: I1203 14:02:51.640121 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.742367 master-0 kubenswrapper[16176]: I1203 14:02:51.742169 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.742367 master-0 kubenswrapper[16176]: I1203 14:02:51.742284 16176 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.742367 master-0 kubenswrapper[16176]: I1203 14:02:51.742319 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.742367 master-0 kubenswrapper[16176]: I1203 14:02:51.742366 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.742748 master-0 kubenswrapper[16176]: I1203 14:02:51.742464 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.745246 master-0 kubenswrapper[16176]: I1203 14:02:51.745191 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") 
" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.745402 master-0 kubenswrapper[16176]: I1203 14:02:51.745328 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.746976 master-0 kubenswrapper[16176]: I1203 14:02:51.746894 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.749386 master-0 kubenswrapper[16176]: I1203 14:02:51.749305 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:51.761018 master-0 kubenswrapper[16176]: I1203 14:02:51.760890 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:52.233781 master-0 kubenswrapper[16176]: I1203 14:02:52.233679 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx" Dec 03 14:02:52.233781 master-0 kubenswrapper[16176]: I1203 14:02:52.233734 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:02:52.265115 master-0 kubenswrapper[16176]: I1203 14:02:52.265038 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:02:54.353978 master-0 kubenswrapper[16176]: I1203 14:02:54.353892 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"] Dec 03 14:02:54.355125 master-0 kubenswrapper[16176]: I1203 14:02:54.355080 16176 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:02:54.355303 master-0 kubenswrapper[16176]: I1203 14:02:54.355227 16176 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.356630 master-0 kubenswrapper[16176]: I1203 14:02:54.356565 16176 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Dec 03 14:02:54.356920 master-0 kubenswrapper[16176]: I1203 14:02:54.356657 16176 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Dec 03 14:02:54.357029 master-0 kubenswrapper[16176]: I1203 14:02:54.356966 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver" containerID="cri-o://a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da" gracePeriod=15
Dec 03 14:02:54.357150 master-0 kubenswrapper[16176]: I1203 14:02:54.357105 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.357229 master-0 kubenswrapper[16176]: I1203 14:02:54.357137 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e666b3a3d526b049a61d7c6d5b53f263418641d81cb64ea04c25d2f6f4646153" gracePeriod=15
Dec 03 14:02:54.357296 master-0 kubenswrapper[16176]: I1203 14:02:54.357250 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://afce319e99c6717d54fcac45d05cfe13edf74be9e988bfb6ced34d2e5a05b5e8" gracePeriod=15
Dec 03 14:02:54.357364 master-0 kubenswrapper[16176]: I1203 14:02:54.357339 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7" gracePeriod=15
Dec 03 14:02:54.357407 master-0 kubenswrapper[16176]: I1203 14:02:54.357392 16176 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-syncer" containerID="cri-o://88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc" gracePeriod=15
Dec 03 14:02:54.357555 master-0 kubenswrapper[16176]: E1203 14:02:54.357524 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.357555 master-0 kubenswrapper[16176]: I1203 14:02:54.357550 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: E1203 14:02:54.357568 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: I1203 14:02:54.357577 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: E1203 14:02:54.357594 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: I1203 14:02:54.357602 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: E1203 14:02:54.357613 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="setup"
Dec 03 14:02:54.357633 master-0 kubenswrapper[16176]: I1203 14:02:54.357621 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="setup"
Dec 03 14:02:54.357871 master-0 kubenswrapper[16176]: E1203 14:02:54.357639 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-syncer"
Dec 03 14:02:54.357871 master-0 kubenswrapper[16176]: I1203 14:02:54.357648 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-syncer"
Dec 03 14:02:54.359010 master-0 kubenswrapper[16176]: E1203 14:02:54.358949 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 14:02:54.359010 master-0 kubenswrapper[16176]: I1203 14:02:54.358977 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 14:02:54.359107 master-0 kubenswrapper[16176]: E1203 14:02:54.359010 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver"
Dec 03 14:02:54.359107 master-0 kubenswrapper[16176]: I1203 14:02:54.359022 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver"
Dec 03 14:02:54.359183 master-0 kubenswrapper[16176]: I1203 14:02:54.359136 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 14:02:54.359291 master-0 kubenswrapper[16176]: I1203 14:02:54.359247 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 14:02:54.359354 master-0 kubenswrapper[16176]: I1203 14:02:54.359303 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 14:02:54.359481 master-0 kubenswrapper[16176]: I1203 14:02:54.359393 16176 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68"
Dec 03 14:02:54.359569 master-0 kubenswrapper[16176]: I1203 14:02:54.359530 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-insecure-readyz"
Dec 03 14:02:54.359569 master-0 kubenswrapper[16176]: I1203 14:02:54.359557 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.359652 master-0 kubenswrapper[16176]: I1203 14:02:54.359584 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver"
Dec 03 14:02:54.359652 master-0 kubenswrapper[16176]: I1203 14:02:54.359600 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-syncer"
Dec 03 14:02:54.359652 master-0 kubenswrapper[16176]: I1203 14:02:54.359614 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.359652 master-0 kubenswrapper[16176]: I1203 14:02:54.359624 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-cert-regeneration-controller"
Dec 03 14:02:54.359831 master-0 kubenswrapper[16176]: E1203 14:02:54.359808 16176 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.359871 master-0 kubenswrapper[16176]: I1203 14:02:54.359833 16176 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.360051 master-0 kubenswrapper[16176]: I1203 14:02:54.360020 16176 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver-check-endpoints"
Dec 03 14:02:54.360491 master-0 kubenswrapper[16176]: I1203 14:02:54.360451 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 14:02:54.360667 master-0 kubenswrapper[16176]: I1203 14:02:54.360638 16176 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 14:02:54.504921 master-0 kubenswrapper[16176]: I1203 14:02:54.504744 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.504921 master-0 kubenswrapper[16176]: I1203 14:02:54.504870 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.504921 master-0 kubenswrapper[16176]: I1203 14:02:54.504898 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.504999 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.505028 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.505055 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.505230 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.505395 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.505570 master-0 kubenswrapper[16176]: I1203 14:02:54.505448 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.505826 master-0 kubenswrapper[16176]: I1203 14:02:54.505580 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.505826 master-0 kubenswrapper[16176]: I1203 14:02:54.505689 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.505826 master-0 kubenswrapper[16176]: I1203 14:02:54.505735 16176 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.606683 master-0 kubenswrapper[16176]: I1203 14:02:54.606518 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.606683 master-0 kubenswrapper[16176]: I1203 14:02:54.606587 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.606683 master-0 kubenswrapper[16176]: I1203 14:02:54.606628 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.606683 master-0 kubenswrapper[16176]: I1203 14:02:54.606677 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606684 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606731 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606702 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606766 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606800 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606829 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606853 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606889 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606912 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606939 16176 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.607079 master-0 kubenswrapper[16176]: I1203 14:02:54.606999 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607585 master-0 kubenswrapper[16176]: I1203 14:02:54.607174 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607585 master-0 kubenswrapper[16176]: I1203 14:02:54.607216 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607585 master-0 kubenswrapper[16176]: I1203 14:02:54.607246 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:54.607585 master-0 kubenswrapper[16176]: I1203 14:02:54.607298 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.607585 master-0 kubenswrapper[16176]: I1203 14:02:54.607329 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:02:54.608421 master-0 kubenswrapper[16176]: I1203 14:02:54.608380 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.609979 master-0 kubenswrapper[16176]: I1203 14:02:54.609504 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.610920 master-0 kubenswrapper[16176]: I1203 14:02:54.610863 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:54.804836 master-0 kubenswrapper[16176]: I1203 14:02:54.798223 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"]
Dec 03 14:02:55.148769 master-0 kubenswrapper[16176]: I1203 14:02:55.148713 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:02:55.258062 master-0 kubenswrapper[16176]: I1203 14:02:55.258008 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-check-endpoints/1.log"
Dec 03 14:02:55.259230 master-0 kubenswrapper[16176]: I1203 14:02:55.259193 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log"
Dec 03 14:02:55.260052 master-0 kubenswrapper[16176]: I1203 14:02:55.260011 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="e666b3a3d526b049a61d7c6d5b53f263418641d81cb64ea04c25d2f6f4646153" exitCode=0
Dec 03 14:02:55.260052 master-0 kubenswrapper[16176]: I1203 14:02:55.260036 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="afce319e99c6717d54fcac45d05cfe13edf74be9e988bfb6ced34d2e5a05b5e8" exitCode=0
Dec 03 14:02:55.260052 master-0 kubenswrapper[16176]: I1203 14:02:55.260047 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc" exitCode=2
Dec 03 14:02:55.260235 master-0 kubenswrapper[16176]: I1203 14:02:55.260096 16176 scope.go:117] "RemoveContainer" containerID="9f50eb15ca499ab21dfb5f2f5b9bc225ce05f0f2ff2359567137d0dbccfe595e"
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: I1203 14:02:55.339927 16176 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]log ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]api-openshift-apiserver-available ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]api-openshift-oauth-apiserver-available ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]informer-sync ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/priority-and-fairness-filter ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-apiextensions-informers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-apiextensions-controllers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/crd-informer-synced ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-system-namespaces-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/rbac/bootstrap-roles ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/bootstrap-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-registration-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-discovery-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]autoregister-completion ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-openapi-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: [-]shutdown failed: reason withheld
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: readyz check failed
Dec 03 14:02:55.345710 master-0 kubenswrapper[16176]: I1203 14:02:55.340007 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="69e3deb6aaa7ca82dd236253a197e02b" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:02:55.404020 master-0 kubenswrapper[16176]: I1203 14:02:55.403793 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Dec 03 14:02:55.404020 master-0 kubenswrapper[16176]: I1203 14:02:55.403860 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Dec 03 14:02:56.275597 master-0 kubenswrapper[16176]: I1203 14:02:56.275418 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log"
Dec 03 14:02:56.277590 master-0 kubenswrapper[16176]: I1203 14:02:56.277472 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7" exitCode=0
Dec 03 14:02:56.280395 master-0 kubenswrapper[16176]: I1203 14:02:56.280340 16176 generic.go:334] "Generic (PLEG): container finished" podID="0b1e0884-ff54-419b-90d3-25f561a6391d" containerID="5944601d984a89efbdcd280a11d9fec3279923f00b3d3e74a67095fff7358739" exitCode=0
Dec 03 14:02:56.280520 master-0 kubenswrapper[16176]: I1203 14:02:56.280398 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0b1e0884-ff54-419b-90d3-25f561a6391d","Type":"ContainerDied","Data":"5944601d984a89efbdcd280a11d9fec3279923f00b3d3e74a67095fff7358739"}
Dec 03 14:02:56.429298 master-0 kubenswrapper[16176]: I1203 14:02:56.429192 16176 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Dec 03 14:02:56.430250 master-0 kubenswrapper[16176]: I1203 14:02:56.429313 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Dec 03 14:02:56.586448 master-0 kubenswrapper[16176]: I1203 14:02:56.586349 16176 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Dec 03 14:02:56.592594 master-0 kubenswrapper[16176]: I1203 14:02:56.592523 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:02:56.611382 master-0 kubenswrapper[16176]: I1203 14:02:56.611294 16176 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:56.651220 master-0 kubenswrapper[16176]: I1203 14:02:56.651128 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"]
Dec 03 14:02:56.652999 master-0 kubenswrapper[16176]: I1203 14:02:56.652939 16176 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:02:57.291436 master-0 kubenswrapper[16176]: I1203 14:02:57.291235 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"c33db12eefc9674554f56bcb1cf6ffb76f254ea52981a51653253a9e93cac907"}
Dec 03 14:02:57.292865 master-0 kubenswrapper[16176]: I1203 14:02:57.292815 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerStarted","Data":"2ad325510cfa7ec2ed916035e1759ebab1f4b5ba10d6518b5654d8b45cc915fd"}
Dec 03 14:02:57.557630 master-0 kubenswrapper[16176]: I1203 14:02:57.557531 16176 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"]
Dec 03 14:02:57.559399 master-0 kubenswrapper[16176]: I1203 14:02:57.559350 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:02:57.694574 master-0 kubenswrapper[16176]: I1203 14:02:57.694476 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") "
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694699 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") "
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694730 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") "
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694870 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694909 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock" (OuterVolumeSpecName: "var-lock") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694968 16176 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:02:57.694990 master-0 kubenswrapper[16176]: I1203 14:02:57.694981 16176 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 14:02:57.698720 master-0 kubenswrapper[16176]: I1203 14:02:57.698607 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:02:57.799034 master-0 kubenswrapper[16176]: I1203 14:02:57.798950 16176 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 14:02:58.272395 master-0 kubenswrapper[16176]: I1203 14:02:58.269195 16176 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84f75d5446-j8tkx"]
Dec 03 14:02:58.300887 master-0 kubenswrapper[16176]: I1203 14:02:58.300804 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac"}
Dec 03 14:02:58.302695 master-0 kubenswrapper[16176]: I1203 14:02:58.302662 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerStarted","Data":"1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed"}
Dec 03 14:02:58.305498 master-0 kubenswrapper[16176]: I1203 14:02:58.305467 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:02:58.308343 master-0 kubenswrapper[16176]: I1203 14:02:58.308307 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"0b1e0884-ff54-419b-90d3-25f561a6391d","Type":"ContainerDied","Data":"528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b"}
Dec 03 14:02:58.308343 master-0 kubenswrapper[16176]: I1203 14:02:58.308338 16176 pod_container_deletor.go:80] "Container not found in pod's containers"
containerID="528fe3575a38dfa47b606cda5d76d610bfdd22dde16f138f6dd4d9018b83ed2b" Dec 03 14:02:58.308527 master-0 kubenswrapper[16176]: I1203 14:02:58.308417 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:02:58.309975 master-0 kubenswrapper[16176]: I1203 14:02:58.309495 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:02:59.803082 master-0 kubenswrapper[16176]: I1203 14:02:59.803004 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1221807-c515-4995-a085-8fc98f62932f" path="/var/lib/kubelet/pods/f1221807-c515-4995-a085-8fc98f62932f/volumes" Dec 03 14:03:00.843416 master-0 kubenswrapper[16176]: I1203 14:03:00.842102 16176 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"] Dec 03 14:03:01.543318 master-0 kubenswrapper[16176]: I1203 14:03:01.543239 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78d987764b-xcs5w_d3200abb-a440-44db-8897-79c809c1d838/controller-manager/0.log" Dec 03 14:03:01.543318 master-0 kubenswrapper[16176]: I1203 14:03:01.543315 16176 generic.go:334] "Generic (PLEG): container finished" podID="d3200abb-a440-44db-8897-79c809c1d838" containerID="1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed" exitCode=255 Dec 03 14:03:01.544156 master-0 kubenswrapper[16176]: I1203 14:03:01.543429 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerDied","Data":"1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed"} Dec 03 14:03:01.544279 master-0 kubenswrapper[16176]: I1203 14:03:01.544232 16176 scope.go:117] "RemoveContainer" 
containerID="1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed" Dec 03 14:03:01.546514 master-0 kubenswrapper[16176]: I1203 14:03:01.546483 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerStarted","Data":"aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b"} Dec 03 14:03:01.546601 master-0 kubenswrapper[16176]: I1203 14:03:01.546527 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerStarted","Data":"35b21820e2867da766849646ff8ef961857ad219466cb8b19f56f804e3dbfe04"} Dec 03 14:03:01.546941 master-0 kubenswrapper[16176]: I1203 14:03:01.546894 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:03:02.547078 master-0 kubenswrapper[16176]: I1203 14:03:02.546947 16176 patch_prober.go:28] interesting pod/route-controller-manager-678c7f799b-4b7nv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.96:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:03:02.547845 master-0 kubenswrapper[16176]: I1203 14:03:02.547695 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.96:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:03:02.558924 master-0 
kubenswrapper[16176]: I1203 14:03:02.558867 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78d987764b-xcs5w_d3200abb-a440-44db-8897-79c809c1d838/controller-manager/0.log" Dec 03 14:03:02.559392 master-0 kubenswrapper[16176]: I1203 14:03:02.559326 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerStarted","Data":"c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7"} Dec 03 14:03:02.559996 master-0 kubenswrapper[16176]: I1203 14:03:02.559953 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:03:02.563249 master-0 kubenswrapper[16176]: I1203 14:03:02.563173 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:03:03.561912 master-0 kubenswrapper[16176]: I1203 14:03:03.561828 16176 patch_prober.go:28] interesting pod/route-controller-manager-678c7f799b-4b7nv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.96:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:03:03.562654 master-0 kubenswrapper[16176]: I1203 14:03:03.561973 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.96:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 14:03:03.587745 master-0 kubenswrapper[16176]: I1203 14:03:03.587679 16176 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log" Dec 03 14:03:03.588647 master-0 kubenswrapper[16176]: I1203 14:03:03.588602 16176 generic.go:334] "Generic (PLEG): container finished" podID="69e3deb6aaa7ca82dd236253a197e02b" containerID="a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da" exitCode=0 Dec 03 14:03:03.994192 master-0 kubenswrapper[16176]: I1203 14:03:03.994102 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log" Dec 03 14:03:03.995087 master-0 kubenswrapper[16176]: I1203 14:03:03.995028 16176 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:03:04.102892 master-0 kubenswrapper[16176]: I1203 14:03:04.102829 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") pod \"69e3deb6aaa7ca82dd236253a197e02b\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " Dec 03 14:03:04.103148 master-0 kubenswrapper[16176]: I1203 14:03:04.103108 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") pod \"69e3deb6aaa7ca82dd236253a197e02b\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " Dec 03 14:03:04.103148 master-0 kubenswrapper[16176]: I1203 14:03:04.103098 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "69e3deb6aaa7ca82dd236253a197e02b" (UID: "69e3deb6aaa7ca82dd236253a197e02b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:03:04.103291 master-0 kubenswrapper[16176]: I1203 14:03:04.103152 16176 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") pod \"69e3deb6aaa7ca82dd236253a197e02b\" (UID: \"69e3deb6aaa7ca82dd236253a197e02b\") " Dec 03 14:03:04.103291 master-0 kubenswrapper[16176]: I1203 14:03:04.103168 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "69e3deb6aaa7ca82dd236253a197e02b" (UID: "69e3deb6aaa7ca82dd236253a197e02b"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:03:04.103291 master-0 kubenswrapper[16176]: I1203 14:03:04.103278 16176 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "69e3deb6aaa7ca82dd236253a197e02b" (UID: "69e3deb6aaa7ca82dd236253a197e02b"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:03:04.104030 master-0 kubenswrapper[16176]: I1203 14:03:04.103979 16176 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-cert-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:03:04.104030 master-0 kubenswrapper[16176]: I1203 14:03:04.104021 16176 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-audit-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:03:04.104030 master-0 kubenswrapper[16176]: I1203 14:03:04.104034 16176 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/69e3deb6aaa7ca82dd236253a197e02b-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:03:04.604502 master-0 kubenswrapper[16176]: I1203 14:03:04.604028 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log" Dec 03 14:03:04.605462 master-0 kubenswrapper[16176]: I1203 14:03:04.605410 16176 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:03:04.605634 master-0 kubenswrapper[16176]: I1203 14:03:04.605599 16176 scope.go:117] "RemoveContainer" containerID="e666b3a3d526b049a61d7c6d5b53f263418641d81cb64ea04c25d2f6f4646153" Dec 03 14:03:04.874193 master-0 kubenswrapper[16176]: I1203 14:03:04.874122 16176 scope.go:117] "RemoveContainer" containerID="afce319e99c6717d54fcac45d05cfe13edf74be9e988bfb6ced34d2e5a05b5e8" Dec 03 14:03:04.922200 master-0 kubenswrapper[16176]: I1203 14:03:04.922019 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Dec 03 14:03:05.017817 master-0 kubenswrapper[16176]: I1203 14:03:05.008376 16176 patch_prober.go:28] interesting pod/thanos-querier-cc996c4bd-j4hzr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="HTTP probe failed with statuscode: 502" start-of-body= Dec 03 14:03:05.017817 master-0 kubenswrapper[16176]: I1203 14:03:05.008442 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerName="kube-rbac-proxy-web" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 03 14:03:05.067790 master-0 kubenswrapper[16176]: I1203 14:03:05.050783 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Dec 03 14:03:05.151423 master-0 kubenswrapper[16176]: I1203 
14:03:05.151357 16176 patch_prober.go:28] interesting pod/controller-manager-78d987764b-xcs5w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.95:8443/healthz\": dial tcp 10.128.0.95:8443: connect: connection refused" start-of-body= Dec 03 14:03:05.151556 master-0 kubenswrapper[16176]: I1203 14:03:05.151441 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.95:8443/healthz\": dial tcp 10.128.0.95:8443: connect: connection refused" Dec 03 14:03:05.403056 master-0 kubenswrapper[16176]: I1203 14:03:05.403008 16176 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:03:05.403150 master-0 kubenswrapper[16176]: I1203 14:03:05.403071 16176 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:03:05.615191 master-0 kubenswrapper[16176]: I1203 14:03:05.615109 16176 generic.go:334] "Generic (PLEG): container finished" podID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" containerID="82bf6f53b6a48ad08a379c0ecf47b28f74fed2b4944e445cff567b57072a04d7" exitCode=0 Dec 03 14:03:05.615823 master-0 kubenswrapper[16176]: I1203 14:03:05.615199 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" 
event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerDied","Data":"82bf6f53b6a48ad08a379c0ecf47b28f74fed2b4944e445cff567b57072a04d7"} Dec 03 14:03:05.616026 master-0 kubenswrapper[16176]: I1203 14:03:05.615987 16176 scope.go:117] "RemoveContainer" containerID="82bf6f53b6a48ad08a379c0ecf47b28f74fed2b4944e445cff567b57072a04d7" Dec 03 14:03:05.617715 master-0 kubenswrapper[16176]: I1203 14:03:05.617676 16176 generic.go:334] "Generic (PLEG): container finished" podID="98392f8e-0285-4bc3-95a9-d29033639ca3" containerID="918c84c43b8f6701690ab8b0d2bb05915c1232978c8cc88ed437a5bf950e3857" exitCode=0 Dec 03 14:03:05.617715 master-0 kubenswrapper[16176]: I1203 14:03:05.617706 16176 generic.go:334] "Generic (PLEG): container finished" podID="98392f8e-0285-4bc3-95a9-d29033639ca3" containerID="28c67c1281b6fbfb0446ab66907176e4746bcaec0c3dd47723e1e5adbfcd3d35" exitCode=0 Dec 03 14:03:05.617844 master-0 kubenswrapper[16176]: I1203 14:03:05.617751 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerDied","Data":"918c84c43b8f6701690ab8b0d2bb05915c1232978c8cc88ed437a5bf950e3857"} Dec 03 14:03:05.617844 master-0 kubenswrapper[16176]: I1203 14:03:05.617818 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerDied","Data":"28c67c1281b6fbfb0446ab66907176e4746bcaec0c3dd47723e1e5adbfcd3d35"} Dec 03 14:03:05.618713 master-0 kubenswrapper[16176]: I1203 14:03:05.618652 16176 scope.go:117] "RemoveContainer" containerID="28c67c1281b6fbfb0446ab66907176e4746bcaec0c3dd47723e1e5adbfcd3d35" Dec 03 14:03:05.618765 master-0 kubenswrapper[16176]: I1203 14:03:05.618715 16176 scope.go:117] "RemoveContainer" containerID="918c84c43b8f6701690ab8b0d2bb05915c1232978c8cc88ed437a5bf950e3857" Dec 03 14:03:05.620808 master-0 
kubenswrapper[16176]: I1203 14:03:05.620759 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/0.log" Dec 03 14:03:05.621799 master-0 kubenswrapper[16176]: I1203 14:03:05.621765 16176 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2" exitCode=0 Dec 03 14:03:05.621799 master-0 kubenswrapper[16176]: I1203 14:03:05.621786 16176 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f" exitCode=2 Dec 03 14:03:05.621799 master-0 kubenswrapper[16176]: I1203 14:03:05.621797 16176 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316" exitCode=0 Dec 03 14:03:05.621950 master-0 kubenswrapper[16176]: I1203 14:03:05.621837 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2"} Dec 03 14:03:05.621950 master-0 kubenswrapper[16176]: I1203 14:03:05.621890 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f"} Dec 03 14:03:05.621950 master-0 kubenswrapper[16176]: I1203 14:03:05.621913 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316"} Dec 03 14:03:05.622291 master-0 kubenswrapper[16176]: I1203 14:03:05.622251 16176 scope.go:117] "RemoveContainer" containerID="4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316" Dec 03 14:03:05.622363 master-0 kubenswrapper[16176]: I1203 14:03:05.622295 16176 scope.go:117] "RemoveContainer" containerID="767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f" Dec 03 14:03:05.622363 master-0 kubenswrapper[16176]: I1203 14:03:05.622306 16176 scope.go:117] "RemoveContainer" containerID="dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2" Dec 03 14:03:05.624060 master-0 kubenswrapper[16176]: I1203 14:03:05.624016 16176 generic.go:334] "Generic (PLEG): container finished" podID="69b752ed-691c-4574-a01e-428d4bf85b75" containerID="40d8eb82545e74f2af7ecc933f815e993ab5d8906f06f08f520dc7dcf35e0ae7" exitCode=0 Dec 03 14:03:05.624060 master-0 kubenswrapper[16176]: I1203 14:03:05.624045 16176 generic.go:334] "Generic (PLEG): container finished" podID="69b752ed-691c-4574-a01e-428d4bf85b75" containerID="5541dbd90d88c32e2769944667fd4132f8eacd3305e658da6b80b11593e7e91f" exitCode=0 Dec 03 14:03:05.624189 master-0 kubenswrapper[16176]: I1203 14:03:05.624086 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerDied","Data":"40d8eb82545e74f2af7ecc933f815e993ab5d8906f06f08f520dc7dcf35e0ae7"} Dec 03 14:03:05.624189 master-0 kubenswrapper[16176]: I1203 14:03:05.624130 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerDied","Data":"5541dbd90d88c32e2769944667fd4132f8eacd3305e658da6b80b11593e7e91f"} Dec 03 14:03:05.624617 master-0 
kubenswrapper[16176]: I1203 14:03:05.624583 16176 scope.go:117] "RemoveContainer" containerID="5541dbd90d88c32e2769944667fd4132f8eacd3305e658da6b80b11593e7e91f" Dec 03 14:03:05.624617 master-0 kubenswrapper[16176]: I1203 14:03:05.624605 16176 scope.go:117] "RemoveContainer" containerID="40d8eb82545e74f2af7ecc933f815e993ab5d8906f06f08f520dc7dcf35e0ae7" Dec 03 14:03:05.625840 master-0 kubenswrapper[16176]: I1203 14:03:05.625804 16176 generic.go:334] "Generic (PLEG): container finished" podID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" containerID="e428ce79dd4eb6b57bb33b528a8e437f92a3e0d91130b0f0524a4aced7793334" exitCode=0 Dec 03 14:03:05.625902 master-0 kubenswrapper[16176]: I1203 14:03:05.625841 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerDied","Data":"e428ce79dd4eb6b57bb33b528a8e437f92a3e0d91130b0f0524a4aced7793334"} Dec 03 14:03:05.626665 master-0 kubenswrapper[16176]: I1203 14:03:05.626616 16176 scope.go:117] "RemoveContainer" containerID="e428ce79dd4eb6b57bb33b528a8e437f92a3e0d91130b0f0524a4aced7793334" Dec 03 14:03:05.628000 master-0 kubenswrapper[16176]: I1203 14:03:05.627966 16176 generic.go:334] "Generic (PLEG): container finished" podID="b340553b-d483-4839-8328-518f27770832" containerID="340d747ec16780778d80dba371a56e575bd9f6634b60bc266323b1291eb8cdba" exitCode=0 Dec 03 14:03:05.628000 master-0 kubenswrapper[16176]: I1203 14:03:05.627984 16176 generic.go:334] "Generic (PLEG): container finished" podID="b340553b-d483-4839-8328-518f27770832" containerID="2d9fa9ab9f7e699411978500abac33a5ab419e6ce3c4e1ef13a7973cd07019af" exitCode=0 Dec 03 14:03:05.628111 master-0 kubenswrapper[16176]: I1203 14:03:05.628026 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" 
event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerDied","Data":"340d747ec16780778d80dba371a56e575bd9f6634b60bc266323b1291eb8cdba"} Dec 03 14:03:05.628111 master-0 kubenswrapper[16176]: I1203 14:03:05.628051 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerDied","Data":"2d9fa9ab9f7e699411978500abac33a5ab419e6ce3c4e1ef13a7973cd07019af"} Dec 03 14:03:05.628399 master-0 kubenswrapper[16176]: I1203 14:03:05.628374 16176 scope.go:117] "RemoveContainer" containerID="2d9fa9ab9f7e699411978500abac33a5ab419e6ce3c4e1ef13a7973cd07019af" Dec 03 14:03:05.628399 master-0 kubenswrapper[16176]: I1203 14:03:05.628394 16176 scope.go:117] "RemoveContainer" containerID="340d747ec16780778d80dba371a56e575bd9f6634b60bc266323b1291eb8cdba" Dec 03 14:03:05.630603 master-0 kubenswrapper[16176]: I1203 14:03:05.630569 16176 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="5cd949ac5ee2d4c762f3178ef1d027c253025d3b527c8391e3b7c924cb4b23dd" exitCode=0 Dec 03 14:03:05.630674 master-0 kubenswrapper[16176]: I1203 14:03:05.630628 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"5cd949ac5ee2d4c762f3178ef1d027c253025d3b527c8391e3b7c924cb4b23dd"} Dec 03 14:03:05.630972 master-0 kubenswrapper[16176]: I1203 14:03:05.630942 16176 scope.go:117] "RemoveContainer" containerID="5cd949ac5ee2d4c762f3178ef1d027c253025d3b527c8391e3b7c924cb4b23dd" Dec 03 14:03:05.633329 master-0 kubenswrapper[16176]: I1203 14:03:05.633302 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-b62gf_b71ac8a5-987d-4eba-8bc0-a091f0a0de16/node-exporter/0.log" Dec 03 14:03:05.634162 master-0 kubenswrapper[16176]: I1203 
14:03:05.634129 16176 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="2d667c8e5534b2ba9a4b068bb037fee023ee61772c5e8bb0ae3dfb586f8f8cf6" exitCode=0 Dec 03 14:03:05.634162 master-0 kubenswrapper[16176]: I1203 14:03:05.634153 16176 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="df7afb54b47f612999216bf1266d421e3dfe58a44fe13dfaccad23838e1be411" exitCode=143 Dec 03 14:03:05.634314 master-0 kubenswrapper[16176]: I1203 14:03:05.634199 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"2d667c8e5534b2ba9a4b068bb037fee023ee61772c5e8bb0ae3dfb586f8f8cf6"} Dec 03 14:03:05.634314 master-0 kubenswrapper[16176]: I1203 14:03:05.634228 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"df7afb54b47f612999216bf1266d421e3dfe58a44fe13dfaccad23838e1be411"} Dec 03 14:03:05.634779 master-0 kubenswrapper[16176]: I1203 14:03:05.634603 16176 scope.go:117] "RemoveContainer" containerID="df7afb54b47f612999216bf1266d421e3dfe58a44fe13dfaccad23838e1be411" Dec 03 14:03:05.634779 master-0 kubenswrapper[16176]: I1203 14:03:05.634629 16176 scope.go:117] "RemoveContainer" containerID="2d667c8e5534b2ba9a4b068bb037fee023ee61772c5e8bb0ae3dfb586f8f8cf6" Dec 03 14:03:05.639012 master-0 kubenswrapper[16176]: I1203 14:03:05.638972 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="d4439eb52546785aae37b834477017e795b9034f4b166104f8bb03f8ec8b60b0" exitCode=0 Dec 03 14:03:05.639012 master-0 kubenswrapper[16176]: I1203 14:03:05.638998 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" 
containerID="ed89108d17473516cc693d23691659c9c34a6af1456a6ccf3615665c33a745cb" exitCode=0 Dec 03 14:03:05.639012 master-0 kubenswrapper[16176]: I1203 14:03:05.639007 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="d68d496052bf3db1426b92297d8fe9b84c2aa7e56373eba2857fc2b0fe99bf8e" exitCode=0 Dec 03 14:03:05.639012 master-0 kubenswrapper[16176]: I1203 14:03:05.639015 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="614fbef2371d69f02abb27d872356dac76b8a71901373650b25ef98e8905fcb1" exitCode=0 Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639023 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="29fc037238de736531b021b6909696c07ee576aafb65e7ef6058ae5092fa824f" exitCode=0 Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639032 16176 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="85777c7fd9763007e07cf6b8a5adab3a7194444324941d56bf95568a55ac023e" exitCode=0 Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639090 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"d4439eb52546785aae37b834477017e795b9034f4b166104f8bb03f8ec8b60b0"} Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639114 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"ed89108d17473516cc693d23691659c9c34a6af1456a6ccf3615665c33a745cb"} Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639126 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"d68d496052bf3db1426b92297d8fe9b84c2aa7e56373eba2857fc2b0fe99bf8e"} Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639137 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"614fbef2371d69f02abb27d872356dac76b8a71901373650b25ef98e8905fcb1"} Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639147 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"29fc037238de736531b021b6909696c07ee576aafb65e7ef6058ae5092fa824f"} Dec 03 14:03:05.639203 master-0 kubenswrapper[16176]: I1203 14:03:05.639157 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"85777c7fd9763007e07cf6b8a5adab3a7194444324941d56bf95568a55ac023e"} Dec 03 14:03:05.639599 master-0 kubenswrapper[16176]: I1203 14:03:05.639568 16176 scope.go:117] "RemoveContainer" containerID="85777c7fd9763007e07cf6b8a5adab3a7194444324941d56bf95568a55ac023e" Dec 03 14:03:05.639599 master-0 kubenswrapper[16176]: I1203 14:03:05.639591 16176 scope.go:117] "RemoveContainer" containerID="29fc037238de736531b021b6909696c07ee576aafb65e7ef6058ae5092fa824f" Dec 03 14:03:05.639599 master-0 kubenswrapper[16176]: I1203 14:03:05.639601 16176 scope.go:117] "RemoveContainer" containerID="614fbef2371d69f02abb27d872356dac76b8a71901373650b25ef98e8905fcb1" Dec 03 14:03:05.639721 master-0 kubenswrapper[16176]: I1203 14:03:05.639613 16176 scope.go:117] "RemoveContainer" containerID="d68d496052bf3db1426b92297d8fe9b84c2aa7e56373eba2857fc2b0fe99bf8e" Dec 
03 14:03:05.639721 master-0 kubenswrapper[16176]: I1203 14:03:05.639622 16176 scope.go:117] "RemoveContainer" containerID="ed89108d17473516cc693d23691659c9c34a6af1456a6ccf3615665c33a745cb" Dec 03 14:03:05.639721 master-0 kubenswrapper[16176]: I1203 14:03:05.639632 16176 scope.go:117] "RemoveContainer" containerID="d4439eb52546785aae37b834477017e795b9034f4b166104f8bb03f8ec8b60b0" Dec 03 14:03:05.642166 master-0 kubenswrapper[16176]: I1203 14:03:05.642104 16176 generic.go:334] "Generic (PLEG): container finished" podID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" containerID="61b6fae7a82c65416e7eb61155697378feee9b64a22c33dc9655e8c1e290fe92" exitCode=0 Dec 03 14:03:05.642300 master-0 kubenswrapper[16176]: I1203 14:03:05.642176 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerDied","Data":"61b6fae7a82c65416e7eb61155697378feee9b64a22c33dc9655e8c1e290fe92"} Dec 03 14:03:05.642608 master-0 kubenswrapper[16176]: I1203 14:03:05.642574 16176 scope.go:117] "RemoveContainer" containerID="61b6fae7a82c65416e7eb61155697378feee9b64a22c33dc9655e8c1e290fe92" Dec 03 14:03:05.644722 master-0 kubenswrapper[16176]: I1203 14:03:05.644674 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-n24qb_6ef37bba-85d9-4303-80c0-aac3dc49d3d9/iptables-alerter/0.log" Dec 03 14:03:05.644801 master-0 kubenswrapper[16176]: I1203 14:03:05.644721 16176 generic.go:334] "Generic (PLEG): container finished" podID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" containerID="1cc3343c335d6a9b27f34bdbcb883b37627ae437e550513d255ee4be2095c4a9" exitCode=143 Dec 03 14:03:05.644801 master-0 kubenswrapper[16176]: I1203 14:03:05.644781 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" 
event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerDied","Data":"1cc3343c335d6a9b27f34bdbcb883b37627ae437e550513d255ee4be2095c4a9"} Dec 03 14:03:05.646684 master-0 kubenswrapper[16176]: I1203 14:03:05.646653 16176 scope.go:117] "RemoveContainer" containerID="1cc3343c335d6a9b27f34bdbcb883b37627ae437e550513d255ee4be2095c4a9" Dec 03 14:03:05.650139 master-0 kubenswrapper[16176]: I1203 14:03:05.650091 16176 generic.go:334] "Generic (PLEG): container finished" podID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" containerID="bd1e1339c2b2a6cbfa32e7380e633e27308d98ec274ac7883938bbf44216022a" exitCode=0 Dec 03 14:03:05.650139 master-0 kubenswrapper[16176]: I1203 14:03:05.650127 16176 generic.go:334] "Generic (PLEG): container finished" podID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" containerID="cc16518683a304fcd778f593df3b44a196725927cbbdb6bf9c7b8406e574f8da" exitCode=0 Dec 03 14:03:05.650278 master-0 kubenswrapper[16176]: I1203 14:03:05.650185 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerDied","Data":"bd1e1339c2b2a6cbfa32e7380e633e27308d98ec274ac7883938bbf44216022a"} Dec 03 14:03:05.650339 master-0 kubenswrapper[16176]: I1203 14:03:05.650249 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerDied","Data":"cc16518683a304fcd778f593df3b44a196725927cbbdb6bf9c7b8406e574f8da"} Dec 03 14:03:05.650657 master-0 kubenswrapper[16176]: I1203 14:03:05.650580 16176 scope.go:117] "RemoveContainer" containerID="cc16518683a304fcd778f593df3b44a196725927cbbdb6bf9c7b8406e574f8da" Dec 03 14:03:05.650657 master-0 kubenswrapper[16176]: I1203 14:03:05.650598 16176 scope.go:117] "RemoveContainer" 
containerID="bd1e1339c2b2a6cbfa32e7380e633e27308d98ec274ac7883938bbf44216022a" Dec 03 14:03:05.653459 master-0 kubenswrapper[16176]: I1203 14:03:05.653413 16176 generic.go:334] "Generic (PLEG): container finished" podID="faa79e15-1875-4865-b5e0-aecd4c447bad" containerID="300bdbe13ceab4cb35cb2752094e93a2034759f3ecd4444f35e3550cfb8561c6" exitCode=0 Dec 03 14:03:05.653558 master-0 kubenswrapper[16176]: I1203 14:03:05.653494 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerDied","Data":"300bdbe13ceab4cb35cb2752094e93a2034759f3ecd4444f35e3550cfb8561c6"} Dec 03 14:03:05.654185 master-0 kubenswrapper[16176]: I1203 14:03:05.654147 16176 scope.go:117] "RemoveContainer" containerID="300bdbe13ceab4cb35cb2752094e93a2034759f3ecd4444f35e3550cfb8561c6" Dec 03 14:03:05.656043 master-0 kubenswrapper[16176]: I1203 14:03:05.655989 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59d99f9b7b-74sss_c95705e3-17ef-40fe-89e8-22586a32621b/insights-operator/2.log" Dec 03 14:03:05.656549 master-0 kubenswrapper[16176]: I1203 14:03:05.656510 16176 generic.go:334] "Generic (PLEG): container finished" podID="c95705e3-17ef-40fe-89e8-22586a32621b" containerID="592afbc8ad5768dc17fb9b4954572832dfc48ca07ff7ac0a602707299294e300" exitCode=2 Dec 03 14:03:05.656617 master-0 kubenswrapper[16176]: I1203 14:03:05.656586 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerDied","Data":"592afbc8ad5768dc17fb9b4954572832dfc48ca07ff7ac0a602707299294e300"} Dec 03 14:03:05.657017 master-0 kubenswrapper[16176]: I1203 14:03:05.656980 16176 scope.go:117] "RemoveContainer" containerID="592afbc8ad5768dc17fb9b4954572832dfc48ca07ff7ac0a602707299294e300" Dec 03 14:03:05.657318 master-0 
kubenswrapper[16176]: E1203 14:03:05.657279 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-59d99f9b7b-74sss_openshift-insights(c95705e3-17ef-40fe-89e8-22586a32621b)\"" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:03:05.658942 master-0 kubenswrapper[16176]: I1203 14:03:05.658846 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-42hmk_19c2a40b-213c-42f1-9459-87c2e780a75f/kube-multus-additional-cni-plugins/0.log" Dec 03 14:03:05.662424 master-0 kubenswrapper[16176]: I1203 14:03:05.662385 16176 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="faaade0e087b9881354c66070961350c192c84a21bcde13985c46c7344e4fb17" exitCode=143 Dec 03 14:03:05.662497 master-0 kubenswrapper[16176]: I1203 14:03:05.662458 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"faaade0e087b9881354c66070961350c192c84a21bcde13985c46c7344e4fb17"} Dec 03 14:03:05.663013 master-0 kubenswrapper[16176]: I1203 14:03:05.662981 16176 scope.go:117] "RemoveContainer" containerID="faaade0e087b9881354c66070961350c192c84a21bcde13985c46c7344e4fb17" Dec 03 14:03:05.665046 master-0 kubenswrapper[16176]: I1203 14:03:05.664988 16176 generic.go:334] "Generic (PLEG): container finished" podID="0535e784-8e28-4090-aa2e-df937910767c" containerID="97750e8f74736f079d18144b0d15743bbfdb9d2f2d7177cc7d677c65b0c4a40e" exitCode=0 Dec 03 14:03:05.665133 master-0 kubenswrapper[16176]: I1203 14:03:05.665058 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerDied","Data":"97750e8f74736f079d18144b0d15743bbfdb9d2f2d7177cc7d677c65b0c4a40e"} Dec 03 14:03:05.665455 master-0 kubenswrapper[16176]: I1203 14:03:05.665416 16176 scope.go:117] "RemoveContainer" containerID="97750e8f74736f079d18144b0d15743bbfdb9d2f2d7177cc7d677c65b0c4a40e" Dec 03 14:03:05.666955 master-0 kubenswrapper[16176]: I1203 14:03:05.666919 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_c98a8d85d3901d33f6fe192bdc7172aa/startup-monitor/0.log" Dec 03 14:03:05.667044 master-0 kubenswrapper[16176]: I1203 14:03:05.666965 16176 generic.go:334] "Generic (PLEG): container finished" podID="c98a8d85d3901d33f6fe192bdc7172aa" containerID="f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac" exitCode=255 Dec 03 14:03:05.667044 master-0 kubenswrapper[16176]: I1203 14:03:05.667022 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerDied","Data":"f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac"} Dec 03 14:03:05.667427 master-0 kubenswrapper[16176]: I1203 14:03:05.667384 16176 scope.go:117] "RemoveContainer" containerID="f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac" Dec 03 14:03:05.670386 master-0 kubenswrapper[16176]: I1203 14:03:05.670349 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-85dbd94574-8jfp5_bcc78129-4a81-410e-9a42-b12043b5a75a/ingress-operator/0.log" Dec 03 14:03:05.670478 master-0 kubenswrapper[16176]: I1203 14:03:05.670406 16176 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" 
containerID="5d702f79deb55550ea5730b11fd4eabda1b93c216210a04d377b3af5044f1982" exitCode=0 Dec 03 14:03:05.670478 master-0 kubenswrapper[16176]: I1203 14:03:05.670425 16176 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" containerID="92ac936b00521b1c9793237e1a0895eec6abe954c1fd85fa13ba44ec6da4fd3b" exitCode=0 Dec 03 14:03:05.670588 master-0 kubenswrapper[16176]: I1203 14:03:05.670466 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerDied","Data":"5d702f79deb55550ea5730b11fd4eabda1b93c216210a04d377b3af5044f1982"} Dec 03 14:03:05.670588 master-0 kubenswrapper[16176]: I1203 14:03:05.670553 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerDied","Data":"92ac936b00521b1c9793237e1a0895eec6abe954c1fd85fa13ba44ec6da4fd3b"} Dec 03 14:03:05.671404 master-0 kubenswrapper[16176]: I1203 14:03:05.671368 16176 scope.go:117] "RemoveContainer" containerID="5d702f79deb55550ea5730b11fd4eabda1b93c216210a04d377b3af5044f1982" Dec 03 14:03:05.671404 master-0 kubenswrapper[16176]: I1203 14:03:05.671394 16176 scope.go:117] "RemoveContainer" containerID="92ac936b00521b1c9793237e1a0895eec6abe954c1fd85fa13ba44ec6da4fd3b" Dec 03 14:03:05.672840 master-0 kubenswrapper[16176]: I1203 14:03:05.672805 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-pcchm_6d38d102-4efe-4ed3-ae23-b1e295cdaccd/network-check-target-container/0.log" Dec 03 14:03:05.672913 master-0 kubenswrapper[16176]: I1203 14:03:05.672853 16176 generic.go:334] "Generic (PLEG): container finished" podID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" containerID="403adb4ba26d18c6883b3621c8cccb4164ce2519226eafb71472f42c8f4a82f4" exitCode=2 Dec 03 
14:03:05.672960 master-0 kubenswrapper[16176]: I1203 14:03:05.672898 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerDied","Data":"403adb4ba26d18c6883b3621c8cccb4164ce2519226eafb71472f42c8f4a82f4"} Dec 03 14:03:05.673818 master-0 kubenswrapper[16176]: I1203 14:03:05.673776 16176 scope.go:117] "RemoveContainer" containerID="403adb4ba26d18c6883b3621c8cccb4164ce2519226eafb71472f42c8f4a82f4" Dec 03 14:03:05.683179 master-0 kubenswrapper[16176]: I1203 14:03:05.683118 16176 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="da69afae29446aeabbed105db5e7cd5bf817f6d679bc553eefcd18107527ad0c" exitCode=0 Dec 03 14:03:05.683179 master-0 kubenswrapper[16176]: I1203 14:03:05.683194 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"da69afae29446aeabbed105db5e7cd5bf817f6d679bc553eefcd18107527ad0c"} Dec 03 14:03:05.684040 master-0 kubenswrapper[16176]: I1203 14:03:05.684000 16176 scope.go:117] "RemoveContainer" containerID="da69afae29446aeabbed105db5e7cd5bf817f6d679bc553eefcd18107527ad0c" Dec 03 14:03:05.686921 master-0 kubenswrapper[16176]: I1203 14:03:05.686885 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/0.log" Dec 03 14:03:05.686993 master-0 kubenswrapper[16176]: I1203 14:03:05.686929 16176 generic.go:334] "Generic (PLEG): container finished" podID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" containerID="62e170df95a1c0ac2f850d72b2d416e6feeb1fc16efb14dae262ed12df7400ca" exitCode=2 Dec 03 14:03:05.687055 master-0 kubenswrapper[16176]: I1203 14:03:05.686999 16176 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerDied","Data":"62e170df95a1c0ac2f850d72b2d416e6feeb1fc16efb14dae262ed12df7400ca"} Dec 03 14:03:05.687769 master-0 kubenswrapper[16176]: I1203 14:03:05.687737 16176 scope.go:117] "RemoveContainer" containerID="62e170df95a1c0ac2f850d72b2d416e6feeb1fc16efb14dae262ed12df7400ca" Dec 03 14:03:05.689094 master-0 kubenswrapper[16176]: I1203 14:03:05.689062 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-648d88c756-vswh8_62f94ae7-6043-4761-a16b-e0f072b1364b/console/0.log" Dec 03 14:03:05.689168 master-0 kubenswrapper[16176]: I1203 14:03:05.689103 16176 generic.go:334] "Generic (PLEG): container finished" podID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerID="921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a" exitCode=2 Dec 03 14:03:05.689230 master-0 kubenswrapper[16176]: I1203 14:03:05.689162 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerDied","Data":"921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a"} Dec 03 14:03:05.689585 master-0 kubenswrapper[16176]: I1203 14:03:05.689556 16176 scope.go:117] "RemoveContainer" containerID="921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a" Dec 03 14:03:05.691722 master-0 kubenswrapper[16176]: I1203 14:03:05.691668 16176 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="fbdaf01cbfe994f2dae86341472aa776f7166e138673e691ad073797fa8ee297" exitCode=0 Dec 03 14:03:05.691801 master-0 kubenswrapper[16176]: I1203 14:03:05.691744 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" 
event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"fbdaf01cbfe994f2dae86341472aa776f7166e138673e691ad073797fa8ee297"} Dec 03 14:03:05.692192 master-0 kubenswrapper[16176]: I1203 14:03:05.692162 16176 scope.go:117] "RemoveContainer" containerID="fbdaf01cbfe994f2dae86341472aa776f7166e138673e691ad073797fa8ee297" Dec 03 14:03:05.694001 master-0 kubenswrapper[16176]: I1203 14:03:05.693956 16176 generic.go:334] "Generic (PLEG): container finished" podID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" containerID="e16025ed04953d73a838df3bbba4fe82854213e488b25790d3df13f916b39c4b" exitCode=0 Dec 03 14:03:05.694132 master-0 kubenswrapper[16176]: I1203 14:03:05.694086 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerDied","Data":"e16025ed04953d73a838df3bbba4fe82854213e488b25790d3df13f916b39c4b"} Dec 03 14:03:05.694628 master-0 kubenswrapper[16176]: I1203 14:03:05.694598 16176 scope.go:117] "RemoveContainer" containerID="e16025ed04953d73a838df3bbba4fe82854213e488b25790d3df13f916b39c4b" Dec 03 14:03:05.697022 master-0 kubenswrapper[16176]: I1203 14:03:05.696995 16176 generic.go:334] "Generic (PLEG): container finished" podID="d7d6a05e-beee-40e9-b376-5c22e285b27a" containerID="464d61112584f8d327c8a89a0106ed622430e5b4bbd85cdaef9caf5124f5db07" exitCode=0 Dec 03 14:03:05.697084 master-0 kubenswrapper[16176]: I1203 14:03:05.697055 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerDied","Data":"464d61112584f8d327c8a89a0106ed622430e5b4bbd85cdaef9caf5124f5db07"} Dec 03 14:03:05.697644 master-0 kubenswrapper[16176]: I1203 14:03:05.697617 16176 scope.go:117] "RemoveContainer" containerID="464d61112584f8d327c8a89a0106ed622430e5b4bbd85cdaef9caf5124f5db07" Dec 03 14:03:05.699359 master-0 
kubenswrapper[16176]: I1203 14:03:05.699150 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-7486ff55f-wcnxg_e9f484c1-1564-49c7-a43d-bd8b971cea20/machine-api-operator/0.log" Dec 03 14:03:05.699645 master-0 kubenswrapper[16176]: I1203 14:03:05.699617 16176 generic.go:334] "Generic (PLEG): container finished" podID="e9f484c1-1564-49c7-a43d-bd8b971cea20" containerID="c881085069f63bacb400a6fed6cb8da6b8a20dafb74617a181cac7ed05a9f546" exitCode=2 Dec 03 14:03:05.699645 master-0 kubenswrapper[16176]: I1203 14:03:05.699641 16176 generic.go:334] "Generic (PLEG): container finished" podID="e9f484c1-1564-49c7-a43d-bd8b971cea20" containerID="fed7f1938d835b6bb43ab21dcc685c6a28234547c348fa1bb19896359af832d6" exitCode=0 Dec 03 14:03:05.699755 master-0 kubenswrapper[16176]: I1203 14:03:05.699655 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerDied","Data":"c881085069f63bacb400a6fed6cb8da6b8a20dafb74617a181cac7ed05a9f546"} Dec 03 14:03:05.699755 master-0 kubenswrapper[16176]: I1203 14:03:05.699687 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerDied","Data":"fed7f1938d835b6bb43ab21dcc685c6a28234547c348fa1bb19896359af832d6"} Dec 03 14:03:05.700152 master-0 kubenswrapper[16176]: I1203 14:03:05.700125 16176 scope.go:117] "RemoveContainer" containerID="fed7f1938d835b6bb43ab21dcc685c6a28234547c348fa1bb19896359af832d6" Dec 03 14:03:05.700238 master-0 kubenswrapper[16176]: I1203 14:03:05.700154 16176 scope.go:117] "RemoveContainer" containerID="c881085069f63bacb400a6fed6cb8da6b8a20dafb74617a181cac7ed05a9f546" Dec 03 14:03:05.702545 master-0 kubenswrapper[16176]: I1203 14:03:05.702517 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="3448e0a3f35606c9594f8a7bf33b0cdd9fd90d740c89dc5c58476c524a180d4e" exitCode=0 Dec 03 14:03:05.702608 master-0 kubenswrapper[16176]: I1203 14:03:05.702582 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"3448e0a3f35606c9594f8a7bf33b0cdd9fd90d740c89dc5c58476c524a180d4e"} Dec 03 14:03:05.703126 master-0 kubenswrapper[16176]: I1203 14:03:05.703104 16176 scope.go:117] "RemoveContainer" containerID="3448e0a3f35606c9594f8a7bf33b0cdd9fd90d740c89dc5c58476c524a180d4e" Dec 03 14:03:05.708844 master-0 kubenswrapper[16176]: I1203 14:03:05.708708 16176 generic.go:334] "Generic (PLEG): container finished" podID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" containerID="5fe6c28d1bda0d80f5409b45f5f8db53ee77efab8e8303d60d2351d01ed9439c" exitCode=0 Dec 03 14:03:05.708844 master-0 kubenswrapper[16176]: I1203 14:03:05.708757 16176 generic.go:334] "Generic (PLEG): container finished" podID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" containerID="18fec28d3c23557c11d08f9c713623d04b2f8661479f3eb4912bd29ec38e3095" exitCode=0 Dec 03 14:03:05.708844 master-0 kubenswrapper[16176]: I1203 14:03:05.708821 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerDied","Data":"5fe6c28d1bda0d80f5409b45f5f8db53ee77efab8e8303d60d2351d01ed9439c"} Dec 03 14:03:05.709027 master-0 kubenswrapper[16176]: I1203 14:03:05.708862 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerDied","Data":"18fec28d3c23557c11d08f9c713623d04b2f8661479f3eb4912bd29ec38e3095"} Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.710165 16176 scope.go:117] "RemoveContainer" 
containerID="18fec28d3c23557c11d08f9c713623d04b2f8661479f3eb4912bd29ec38e3095" Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.710196 16176 scope.go:117] "RemoveContainer" containerID="5fe6c28d1bda0d80f5409b45f5f8db53ee77efab8e8303d60d2351d01ed9439c" Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.711808 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-547cc9cc49-kqs4k_b02244d0-f4ef-4702-950d-9e3fb5ced128/monitoring-plugin/0.log" Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.711853 16176 generic.go:334] "Generic (PLEG): container finished" podID="b02244d0-f4ef-4702-950d-9e3fb5ced128" containerID="9058f4b410f7256df900169f0b2bf588775e3621f3e9856797584adba5ed0a94" exitCode=2 Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.713206 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerDied","Data":"9058f4b410f7256df900169f0b2bf588775e3621f3e9856797584adba5ed0a94"} Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.714568 16176 generic.go:334] "Generic (PLEG): container finished" podID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" containerID="2fbde32eb18abfa7b6e72ffc4634a409c0aca270b847e310a795438cc6476311" exitCode=0 Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.714708 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerDied","Data":"2fbde32eb18abfa7b6e72ffc4634a409c0aca270b847e310a795438cc6476311"} Dec 03 14:03:05.718080 master-0 kubenswrapper[16176]: I1203 14:03:05.714882 16176 scope.go:117] "RemoveContainer" containerID="9058f4b410f7256df900169f0b2bf588775e3621f3e9856797584adba5ed0a94" Dec 03 14:03:05.718080 master-0 
kubenswrapper[16176]: I1203 14:03:05.715822 16176 scope.go:117] "RemoveContainer" containerID="2fbde32eb18abfa7b6e72ffc4634a409c0aca270b847e310a795438cc6476311" Dec 03 14:03:05.719370 master-0 kubenswrapper[16176]: I1203 14:03:05.719146 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"43ae0856c507c2bff378887cd0a84b438d3de6bb78d726d6ebc950e521af94bd"} Dec 03 14:03:05.720089 master-0 kubenswrapper[16176]: I1203 14:03:05.719211 16176 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="43ae0856c507c2bff378887cd0a84b438d3de6bb78d726d6ebc950e521af94bd" exitCode=0 Dec 03 14:03:05.720768 master-0 kubenswrapper[16176]: I1203 14:03:05.720701 16176 scope.go:117] "RemoveContainer" containerID="43ae0856c507c2bff378887cd0a84b438d3de6bb78d726d6ebc950e521af94bd" Dec 03 14:03:05.725994 master-0 kubenswrapper[16176]: I1203 14:03:05.725885 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2" exitCode=0 Dec 03 14:03:05.725994 master-0 kubenswrapper[16176]: I1203 14:03:05.725915 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856" exitCode=0 Dec 03 14:03:05.725994 master-0 kubenswrapper[16176]: I1203 14:03:05.725928 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c" exitCode=0 Dec 03 14:03:05.725994 master-0 kubenswrapper[16176]: I1203 14:03:05.725938 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" 
containerID="644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2" exitCode=0 Dec 03 14:03:05.725994 master-0 kubenswrapper[16176]: I1203 14:03:05.725947 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd" exitCode=0 Dec 03 14:03:05.726230 master-0 kubenswrapper[16176]: I1203 14:03:05.726010 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2"} Dec 03 14:03:05.726230 master-0 kubenswrapper[16176]: I1203 14:03:05.726043 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856"} Dec 03 14:03:05.726230 master-0 kubenswrapper[16176]: I1203 14:03:05.726072 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c"} Dec 03 14:03:05.726230 master-0 kubenswrapper[16176]: I1203 14:03:05.726087 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2"} Dec 03 14:03:05.726230 master-0 kubenswrapper[16176]: I1203 14:03:05.726100 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd"} Dec 03 
14:03:05.726740 master-0 kubenswrapper[16176]: I1203 14:03:05.726654 16176 scope.go:117] "RemoveContainer" containerID="248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd" Dec 03 14:03:05.726740 master-0 kubenswrapper[16176]: I1203 14:03:05.726683 16176 scope.go:117] "RemoveContainer" containerID="644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2" Dec 03 14:03:05.726740 master-0 kubenswrapper[16176]: I1203 14:03:05.726698 16176 scope.go:117] "RemoveContainer" containerID="13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c" Dec 03 14:03:05.726740 master-0 kubenswrapper[16176]: I1203 14:03:05.726710 16176 scope.go:117] "RemoveContainer" containerID="2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856" Dec 03 14:03:05.726740 master-0 kubenswrapper[16176]: I1203 14:03:05.726723 16176 scope.go:117] "RemoveContainer" containerID="38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2" Dec 03 14:03:05.729153 master-0 kubenswrapper[16176]: I1203 14:03:05.729116 16176 generic.go:334] "Generic (PLEG): container finished" podID="8c6fa89f-268c-477b-9f04-238d2305cc89" containerID="0c03e8e688624f8100b3363cdd3745128b8e51e4e7927b4fa1f8b7fa1283a77a" exitCode=0 Dec 03 14:03:05.729153 master-0 kubenswrapper[16176]: I1203 14:03:05.729145 16176 generic.go:334] "Generic (PLEG): container finished" podID="8c6fa89f-268c-477b-9f04-238d2305cc89" containerID="3a804dba6904156085c90f6cda9cd5712202105d18772a319912b1b6826d11b6" exitCode=0 Dec 03 14:03:05.729302 master-0 kubenswrapper[16176]: I1203 14:03:05.729186 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerDied","Data":"0c03e8e688624f8100b3363cdd3745128b8e51e4e7927b4fa1f8b7fa1283a77a"} Dec 03 14:03:05.729302 master-0 kubenswrapper[16176]: I1203 14:03:05.729211 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerDied","Data":"3a804dba6904156085c90f6cda9cd5712202105d18772a319912b1b6826d11b6"} Dec 03 14:03:05.729614 master-0 kubenswrapper[16176]: I1203 14:03:05.729584 16176 scope.go:117] "RemoveContainer" containerID="3a804dba6904156085c90f6cda9cd5712202105d18772a319912b1b6826d11b6" Dec 03 14:03:05.729663 master-0 kubenswrapper[16176]: I1203 14:03:05.729613 16176 scope.go:117] "RemoveContainer" containerID="0c03e8e688624f8100b3363cdd3745128b8e51e4e7927b4fa1f8b7fa1283a77a" Dec 03 14:03:05.732339 master-0 kubenswrapper[16176]: I1203 14:03:05.732275 16176 generic.go:334] "Generic (PLEG): container finished" podID="52100521-67e9-40c9-887c-eda6560f06e0" containerID="d59cdedb5194c6940e4cccb82c7b05bdcd7e1bcbc39bc385216aa8a7a9d70f09" exitCode=0 Dec 03 14:03:05.732526 master-0 kubenswrapper[16176]: I1203 14:03:05.732487 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerDied","Data":"d59cdedb5194c6940e4cccb82c7b05bdcd7e1bcbc39bc385216aa8a7a9d70f09"} Dec 03 14:03:05.733272 master-0 kubenswrapper[16176]: I1203 14:03:05.733226 16176 scope.go:117] "RemoveContainer" containerID="d59cdedb5194c6940e4cccb82c7b05bdcd7e1bcbc39bc385216aa8a7a9d70f09" Dec 03 14:03:05.742826 master-0 kubenswrapper[16176]: I1203 14:03:05.742568 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/0.log" Dec 03 14:03:05.748878 master-0 kubenswrapper[16176]: I1203 14:03:05.748828 16176 generic.go:334] "Generic (PLEG): container finished" podID="da583723-b3ad-4a6f-b586-09b739bd7f8c" containerID="a9ab7af0c7f5cba028485cf75bcb8b2472f6f17e4fd93b6731c12213f34fc92b" exitCode=0 Dec 03 14:03:05.748989 master-0 
kubenswrapper[16176]: I1203 14:03:05.748936 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerDied","Data":"a9ab7af0c7f5cba028485cf75bcb8b2472f6f17e4fd93b6731c12213f34fc92b"}
Dec 03 14:03:05.749493 master-0 kubenswrapper[16176]: I1203 14:03:05.749462 16176 scope.go:117] "RemoveContainer" containerID="a9ab7af0c7f5cba028485cf75bcb8b2472f6f17e4fd93b6731c12213f34fc92b"
Dec 03 14:03:05.755911 master-0 kubenswrapper[16176]: I1203 14:03:05.755856 16176 generic.go:334] "Generic (PLEG): container finished" podID="b3eef3ef-f954-4e47-92b4-0155bc27332d" containerID="987691eed2494b5e7a9d7c407aeb51b5d0ff0a9c31c9a683dc5b41ae08c3b546" exitCode=0
Dec 03 14:03:05.756020 master-0 kubenswrapper[16176]: I1203 14:03:05.755909 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerDied","Data":"987691eed2494b5e7a9d7c407aeb51b5d0ff0a9c31c9a683dc5b41ae08c3b546"}
Dec 03 14:03:05.756629 master-0 kubenswrapper[16176]: I1203 14:03:05.756601 16176 scope.go:117] "RemoveContainer" containerID="987691eed2494b5e7a9d7c407aeb51b5d0ff0a9c31c9a683dc5b41ae08c3b546"
Dec 03 14:03:05.759864 master-0 kubenswrapper[16176]: I1203 14:03:05.759826 16176 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="8c5297427b396fa8732c10042afbb91ea37eb70462659d5bb64cdcf4bc7a43ac" exitCode=0
Dec 03 14:03:05.759864 master-0 kubenswrapper[16176]: I1203 14:03:05.759853 16176 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="b70d802ed5349d93be4b21929843e3c3a0b76580d514cea5aa17e96cf9684487" exitCode=0
Dec 03 14:03:05.759864 master-0 kubenswrapper[16176]: I1203 14:03:05.759862 16176 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="5b897834f693b15d6c895d9748f4236c069b41b71d42c3fce4d9a8363e167436" exitCode=0
Dec 03 14:03:05.760032 master-0 kubenswrapper[16176]: I1203 14:03:05.759904 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"8c5297427b396fa8732c10042afbb91ea37eb70462659d5bb64cdcf4bc7a43ac"}
Dec 03 14:03:05.760032 master-0 kubenswrapper[16176]: I1203 14:03:05.759927 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"b70d802ed5349d93be4b21929843e3c3a0b76580d514cea5aa17e96cf9684487"}
Dec 03 14:03:05.760032 master-0 kubenswrapper[16176]: I1203 14:03:05.759939 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"5b897834f693b15d6c895d9748f4236c069b41b71d42c3fce4d9a8363e167436"}
Dec 03 14:03:05.760308 master-0 kubenswrapper[16176]: I1203 14:03:05.760280 16176 scope.go:117] "RemoveContainer" containerID="5b897834f693b15d6c895d9748f4236c069b41b71d42c3fce4d9a8363e167436"
Dec 03 14:03:05.760308 master-0 kubenswrapper[16176]: I1203 14:03:05.760306 16176 scope.go:117] "RemoveContainer" containerID="b70d802ed5349d93be4b21929843e3c3a0b76580d514cea5aa17e96cf9684487"
Dec 03 14:03:05.760417 master-0 kubenswrapper[16176]: I1203 14:03:05.760317 16176 scope.go:117] "RemoveContainer" containerID="8c5297427b396fa8732c10042afbb91ea37eb70462659d5bb64cdcf4bc7a43ac"
Dec 03 14:03:05.763656 master-0 kubenswrapper[16176]: I1203 14:03:05.762590 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-vkpv4_e3675c78-1902-4b92-8a93-cf2dc316f060/serve-healthcheck-canary/0.log"
Dec 03 14:03:05.763656 master-0 kubenswrapper[16176]: I1203 14:03:05.762630 16176 generic.go:334] "Generic (PLEG): container finished" podID="e3675c78-1902-4b92-8a93-cf2dc316f060" containerID="fc86fde0a5d65413e6a1c92cecb6d204dac3c0ea36aa50c66d9cbb60436631fe" exitCode=2
Dec 03 14:03:05.763656 master-0 kubenswrapper[16176]: I1203 14:03:05.762688 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerDied","Data":"fc86fde0a5d65413e6a1c92cecb6d204dac3c0ea36aa50c66d9cbb60436631fe"}
Dec 03 14:03:05.763656 master-0 kubenswrapper[16176]: I1203 14:03:05.763272 16176 scope.go:117] "RemoveContainer" containerID="fc86fde0a5d65413e6a1c92cecb6d204dac3c0ea36aa50c66d9cbb60436631fe"
Dec 03 14:03:05.765627 master-0 kubenswrapper[16176]: I1203 14:03:05.765599 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-57cbc648f8-q4cgg_74e39dce-29d5-4b2a-ab19-386b6cdae94d/openshift-state-metrics/0.log"
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766583 16176 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="d6c55f6716708ffd9697648df2c909b367e721e7331928d93a6855113e7545e3" exitCode=2
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766600 16176 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="e134bd03f8d94ccc31b157a88dbe27e9d8f8d599da864933d7d0eaca01de317a" exitCode=0
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766609 16176 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="5dfbbb0a992a6c3399210f337dd1fc3bad574cdd201a086dd6a45a86b62681a3" exitCode=0
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766649 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"d6c55f6716708ffd9697648df2c909b367e721e7331928d93a6855113e7545e3"}
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766666 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"e134bd03f8d94ccc31b157a88dbe27e9d8f8d599da864933d7d0eaca01de317a"}
Dec 03 14:03:05.766696 master-0 kubenswrapper[16176]: I1203 14:03:05.766677 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"5dfbbb0a992a6c3399210f337dd1fc3bad574cdd201a086dd6a45a86b62681a3"}
Dec 03 14:03:05.767072 master-0 kubenswrapper[16176]: I1203 14:03:05.767051 16176 scope.go:117] "RemoveContainer" containerID="5dfbbb0a992a6c3399210f337dd1fc3bad574cdd201a086dd6a45a86b62681a3"
Dec 03 14:03:05.767109 master-0 kubenswrapper[16176]: I1203 14:03:05.767077 16176 scope.go:117] "RemoveContainer" containerID="e134bd03f8d94ccc31b157a88dbe27e9d8f8d599da864933d7d0eaca01de317a"
Dec 03 14:03:05.767109 master-0 kubenswrapper[16176]: I1203 14:03:05.767090 16176 scope.go:117] "RemoveContainer" containerID="d6c55f6716708ffd9697648df2c909b367e721e7331928d93a6855113e7545e3"
Dec 03 14:03:05.769731 master-0 kubenswrapper[16176]: I1203 14:03:05.769473 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5d7cd7f9-2hp75_4dd1d142-6569-438d-b0c2-582aed44812d/console/0.log"
Dec 03 14:03:05.769731 master-0 kubenswrapper[16176]: I1203 14:03:05.769521 16176 generic.go:334] "Generic (PLEG): container finished" podID="4dd1d142-6569-438d-b0c2-582aed44812d" containerID="bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1" exitCode=2
Dec 03 14:03:05.769731 master-0 kubenswrapper[16176]: I1203 14:03:05.769573 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerDied","Data":"bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1"}
Dec 03 14:03:05.770329 master-0 kubenswrapper[16176]: I1203 14:03:05.770055 16176 scope.go:117] "RemoveContainer" containerID="bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1"
Dec 03 14:03:05.773279 master-0 kubenswrapper[16176]: I1203 14:03:05.773231 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-pvrfs_eecc43f5-708f-4395-98cc-696b243d6321/machine-config-server/0.log"
Dec 03 14:03:05.773358 master-0 kubenswrapper[16176]: I1203 14:03:05.773295 16176 generic.go:334] "Generic (PLEG): container finished" podID="eecc43f5-708f-4395-98cc-696b243d6321" containerID="416059ff0cc777d3d9d6dfaa42a36a430fbc17cbb4e53827d4fbb502c6e34e99" exitCode=2
Dec 03 14:03:05.773392 master-0 kubenswrapper[16176]: I1203 14:03:05.773353 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerDied","Data":"416059ff0cc777d3d9d6dfaa42a36a430fbc17cbb4e53827d4fbb502c6e34e99"}
Dec 03 14:03:05.773742 master-0 kubenswrapper[16176]: I1203 14:03:05.773708 16176 scope.go:117] "RemoveContainer" containerID="416059ff0cc777d3d9d6dfaa42a36a430fbc17cbb4e53827d4fbb502c6e34e99"
Dec 03 14:03:05.776513 master-0 kubenswrapper[16176]: I1203 14:03:05.776493 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7dcc7f9bd6-68wml_8eee1d96-2f58-41a6-ae51-c158b29fc813/kube-state-metrics/0.log"
Dec 03 14:03:05.776579 master-0 kubenswrapper[16176]: I1203 14:03:05.776537 16176 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" containerID="e1c459dde35a615ba7882fda79fdee2dd95b10a24c239fcce598bdf3ad30914a" exitCode=0
Dec 03 14:03:05.776579 master-0 kubenswrapper[16176]: I1203 14:03:05.776549 16176 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" containerID="061547e7d7b4af7eb58e2c7231ae020567b904227a23d4c97a1b77417b710997" exitCode=0
Dec 03 14:03:05.776579 master-0 kubenswrapper[16176]: I1203 14:03:05.776558 16176 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" containerID="31a1968fe005da1858b89a5e00cb177cc6f28af82f98edf135f3f1121701bc4c" exitCode=2
Dec 03 14:03:05.776676 master-0 kubenswrapper[16176]: I1203 14:03:05.776595 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"e1c459dde35a615ba7882fda79fdee2dd95b10a24c239fcce598bdf3ad30914a"}
Dec 03 14:03:05.776676 master-0 kubenswrapper[16176]: I1203 14:03:05.776612 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"061547e7d7b4af7eb58e2c7231ae020567b904227a23d4c97a1b77417b710997"}
Dec 03 14:03:05.776676 master-0 kubenswrapper[16176]: I1203 14:03:05.776622 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"31a1968fe005da1858b89a5e00cb177cc6f28af82f98edf135f3f1121701bc4c"}
Dec 03 14:03:05.776911 master-0 kubenswrapper[16176]: I1203 14:03:05.776890 16176 scope.go:117] "RemoveContainer" containerID="31a1968fe005da1858b89a5e00cb177cc6f28af82f98edf135f3f1121701bc4c"
Dec 03 14:03:05.776911 master-0 kubenswrapper[16176]: I1203 14:03:05.776911 16176 scope.go:117] "RemoveContainer" containerID="061547e7d7b4af7eb58e2c7231ae020567b904227a23d4c97a1b77417b710997"
Dec 03 14:03:05.776992 master-0 kubenswrapper[16176]: I1203 14:03:05.776920 16176 scope.go:117] "RemoveContainer" containerID="e1c459dde35a615ba7882fda79fdee2dd95b10a24c239fcce598bdf3ad30914a"
Dec 03 14:03:05.781424 master-0 kubenswrapper[16176]: I1203 14:03:05.781378 16176 generic.go:334] "Generic (PLEG): container finished" podID="1c562495-1290-4792-b4b2-639faa594ae2" containerID="566f323c45d81781fedd2bdc80905670d4cd7c9f187134067cb868a4c67c719d" exitCode=0
Dec 03 14:03:05.781424 master-0 kubenswrapper[16176]: I1203 14:03:05.781429 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerDied","Data":"566f323c45d81781fedd2bdc80905670d4cd7c9f187134067cb868a4c67c719d"}
Dec 03 14:03:05.782463 master-0 kubenswrapper[16176]: I1203 14:03:05.781854 16176 scope.go:117] "RemoveContainer" containerID="566f323c45d81781fedd2bdc80905670d4cd7c9f187134067cb868a4c67c719d"
Dec 03 14:03:05.787737 master-0 kubenswrapper[16176]: I1203 14:03:05.787688 16176 generic.go:334] "Generic (PLEG): container finished" podID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" containerID="1819ee8dddbb1231277ada301d8d1ef733f0d9656e6fbb70ea5bc8f0833fffdf" exitCode=0
Dec 03 14:03:05.787838 master-0 kubenswrapper[16176]: I1203 14:03:05.787792 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerDied","Data":"1819ee8dddbb1231277ada301d8d1ef733f0d9656e6fbb70ea5bc8f0833fffdf"}
Dec 03 14:03:05.788404 master-0 kubenswrapper[16176]: I1203 14:03:05.788377 16176 scope.go:117] "RemoveContainer" containerID="1819ee8dddbb1231277ada301d8d1ef733f0d9656e6fbb70ea5bc8f0833fffdf"
Dec 03 14:03:05.790787 master-0 kubenswrapper[16176]: I1203 14:03:05.790755 16176 generic.go:334] "Generic (PLEG): container finished" podID="7663a25e-236d-4b1d-83ce-733ab146dee3" containerID="f1973b4466f42fb61df6cd77cbfef702ec93663db7196dad05b5c33c247207c2" exitCode=0
Dec 03 14:03:05.790787 master-0 kubenswrapper[16176]: I1203 14:03:05.790777 16176 generic.go:334] "Generic (PLEG): container finished" podID="7663a25e-236d-4b1d-83ce-733ab146dee3" containerID="079a77d7b6e77ececcba1148cb7a0f749581bf55fe762e3242f107e6b8f5cdca" exitCode=0
Dec 03 14:03:05.790897 master-0 kubenswrapper[16176]: I1203 14:03:05.790817 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerDied","Data":"f1973b4466f42fb61df6cd77cbfef702ec93663db7196dad05b5c33c247207c2"}
Dec 03 14:03:05.790897 master-0 kubenswrapper[16176]: I1203 14:03:05.790840 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerDied","Data":"079a77d7b6e77ececcba1148cb7a0f749581bf55fe762e3242f107e6b8f5cdca"}
Dec 03 14:03:05.791191 master-0 kubenswrapper[16176]: I1203 14:03:05.791165 16176 scope.go:117] "RemoveContainer" containerID="079a77d7b6e77ececcba1148cb7a0f749581bf55fe762e3242f107e6b8f5cdca"
Dec 03 14:03:05.791191 master-0 kubenswrapper[16176]: I1203 14:03:05.791185 16176 scope.go:117] "RemoveContainer" containerID="f1973b4466f42fb61df6cd77cbfef702ec93663db7196dad05b5c33c247207c2"
Dec 03 14:03:05.807517 master-0 kubenswrapper[16176]: I1203 14:03:05.807470 16176 generic.go:334] "Generic (PLEG): container finished" podID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" containerID="ab5e3a3e803ad2fa9e552b5c448abfac370df50c48257b25a5dcf38408830685" exitCode=0
Dec 03 14:03:05.807517 master-0 kubenswrapper[16176]: I1203 14:03:05.807511 16176 generic.go:334] "Generic (PLEG): container finished" podID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" containerID="2a09165acb6a766506a8fe7bf33d8e34418e6c5b87698b5a708a45feb615a317" exitCode=0
Dec 03 14:03:05.808507 master-0 kubenswrapper[16176]: I1203 14:03:05.808459 16176 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e3deb6aaa7ca82dd236253a197e02b" path="/var/lib/kubelet/pods/69e3deb6aaa7ca82dd236253a197e02b/volumes"
Dec 03 14:03:05.808798 master-0 kubenswrapper[16176]: I1203 14:03:05.808740 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Dec 03 14:03:05.809494 master-0 kubenswrapper[16176]: I1203 14:03:05.809454 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerDied","Data":"ab5e3a3e803ad2fa9e552b5c448abfac370df50c48257b25a5dcf38408830685"}
Dec 03 14:03:05.809567 master-0 kubenswrapper[16176]: I1203 14:03:05.809507 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerDied","Data":"2a09165acb6a766506a8fe7bf33d8e34418e6c5b87698b5a708a45feb615a317"}
Dec 03 14:03:05.809923 master-0 kubenswrapper[16176]: I1203 14:03:05.809895 16176 scope.go:117] "RemoveContainer" containerID="2a09165acb6a766506a8fe7bf33d8e34418e6c5b87698b5a708a45feb615a317"
Dec 03 14:03:05.809923 master-0 kubenswrapper[16176]: I1203 14:03:05.809921 16176 scope.go:117] "RemoveContainer" containerID="ab5e3a3e803ad2fa9e552b5c448abfac370df50c48257b25a5dcf38408830685"
Dec 03 14:03:05.810385 master-0 kubenswrapper[16176]: I1203 14:03:05.810345 16176 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="0865064c1bd01843e6eb79589acf3b6fd2673d981fffa22a338dc8de926dc48d" exitCode=0
Dec 03 14:03:05.810436 master-0 kubenswrapper[16176]: I1203 14:03:05.810421 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerDied","Data":"0865064c1bd01843e6eb79589acf3b6fd2673d981fffa22a338dc8de926dc48d"}
Dec 03 14:03:05.811086 master-0 kubenswrapper[16176]: I1203 14:03:05.811062 16176 scope.go:117] "RemoveContainer" containerID="0865064c1bd01843e6eb79589acf3b6fd2673d981fffa22a338dc8de926dc48d"
Dec 03 14:03:05.825562 master-0 kubenswrapper[16176]: I1203 14:03:05.825514 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovn-acl-logging/0.log"
Dec 03 14:03:05.826644 master-0 kubenswrapper[16176]: I1203 14:03:05.826615 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovn-controller/0.log"
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828488 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="23cec35f733927117a13c3db04a2902bbdba779ffa181a6493078d6d61e24067" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828535 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="210c6a8d2e386e655950675cf053111f6b97278ea90880c4d559e45206b5f80e" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828545 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="2eb1c0b87c5115a6c77880a0f6ea1d1e7c19c3d4b6adfbc9b213cb39d18f5119" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828557 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="682ed814488a1da09a97f46ae8065a12c156a25ca7d9ebf8ee99e80832d404f9" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828566 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="f9012451e143c661acf43d4a684e09fb51017c86e48f95ec5cedea2d66519495" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828574 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="9788dbc1822077a1345e0665b546240f6ae71123d0574d85f2a1bbad5b369d94" exitCode=0
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828582 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="511febd919fe51806b9e58c83ddbd24084a2ff41f70013d1f8cf1b73f8d1c121" exitCode=143
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828591 16176 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="9f5a54c31b58a99af81ae65e75ae6c435f6f05ae1f2ddaef3530aab147be46cc" exitCode=143
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828619 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"23cec35f733927117a13c3db04a2902bbdba779ffa181a6493078d6d61e24067"}
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828749 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"210c6a8d2e386e655950675cf053111f6b97278ea90880c4d559e45206b5f80e"}
Dec 03 14:03:05.829076 master-0 kubenswrapper[16176]: I1203 14:03:05.828776 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"2eb1c0b87c5115a6c77880a0f6ea1d1e7c19c3d4b6adfbc9b213cb39d18f5119"}
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829624 16176 scope.go:117] "RemoveContainer" containerID="9f5a54c31b58a99af81ae65e75ae6c435f6f05ae1f2ddaef3530aab147be46cc"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829641 16176 scope.go:117] "RemoveContainer" containerID="511febd919fe51806b9e58c83ddbd24084a2ff41f70013d1f8cf1b73f8d1c121"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829648 16176 scope.go:117] "RemoveContainer" containerID="9788dbc1822077a1345e0665b546240f6ae71123d0574d85f2a1bbad5b369d94"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829656 16176 scope.go:117] "RemoveContainer" containerID="f9012451e143c661acf43d4a684e09fb51017c86e48f95ec5cedea2d66519495"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829663 16176 scope.go:117] "RemoveContainer" containerID="682ed814488a1da09a97f46ae8065a12c156a25ca7d9ebf8ee99e80832d404f9"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829670 16176 scope.go:117] "RemoveContainer" containerID="2eb1c0b87c5115a6c77880a0f6ea1d1e7c19c3d4b6adfbc9b213cb39d18f5119"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829677 16176 scope.go:117] "RemoveContainer" containerID="210c6a8d2e386e655950675cf053111f6b97278ea90880c4d559e45206b5f80e"
Dec 03 14:03:05.829760 master-0 kubenswrapper[16176]: I1203 14:03:05.829683 16176 scope.go:117] "RemoveContainer" containerID="23cec35f733927117a13c3db04a2902bbdba779ffa181a6493078d6d61e24067"
Dec 03 14:03:05.830155 master-0 kubenswrapper[16176]: I1203 14:03:05.828788 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"682ed814488a1da09a97f46ae8065a12c156a25ca7d9ebf8ee99e80832d404f9"}
Dec 03 14:03:05.830155 master-0 kubenswrapper[16176]: I1203 14:03:05.830150 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"f9012451e143c661acf43d4a684e09fb51017c86e48f95ec5cedea2d66519495"}
Dec 03 14:03:05.830306 master-0 kubenswrapper[16176]: I1203 14:03:05.830165 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"9788dbc1822077a1345e0665b546240f6ae71123d0574d85f2a1bbad5b369d94"}
Dec 03 14:03:05.830306 master-0 kubenswrapper[16176]: I1203 14:03:05.830176 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"511febd919fe51806b9e58c83ddbd24084a2ff41f70013d1f8cf1b73f8d1c121"}
Dec 03 14:03:05.830306 master-0 kubenswrapper[16176]: I1203 14:03:05.830186 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"9f5a54c31b58a99af81ae65e75ae6c435f6f05ae1f2ddaef3530aab147be46cc"}
Dec 03 14:03:05.832043 master-0 kubenswrapper[16176]: I1203 14:03:05.831950 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/1.log"
Dec 03 14:03:05.832920 master-0 kubenswrapper[16176]: I1203 14:03:05.832885 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/0.log"
Dec 03 14:03:05.832997 master-0 kubenswrapper[16176]: I1203 14:03:05.832966 16176 generic.go:334] "Generic (PLEG): container finished" podID="adbcce01-7282-4a75-843a-9623060346f0" containerID="594fb0126cf93faf50cc852686eaa0e96acf2a43e60f5721648d7a6bb2d3b91d" exitCode=1
Dec 03 14:03:05.833104 master-0 kubenswrapper[16176]: I1203 14:03:05.833069 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerDied","Data":"594fb0126cf93faf50cc852686eaa0e96acf2a43e60f5721648d7a6bb2d3b91d"}
Dec 03 14:03:05.833833 master-0 kubenswrapper[16176]: I1203 14:03:05.833801 16176 scope.go:117] "RemoveContainer" containerID="594fb0126cf93faf50cc852686eaa0e96acf2a43e60f5721648d7a6bb2d3b91d"
Dec 03 14:03:05.838491 master-0 kubenswrapper[16176]: I1203 14:03:05.838445 16176 generic.go:334] "Generic (PLEG): container finished" podID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerID="eb1d92aded35f4e70ee705b9c2fa75beb06f733923ba781b39acba9b70fd643f" exitCode=0
Dec 03 14:03:05.838565 master-0 kubenswrapper[16176]: I1203 14:03:05.838521 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerDied","Data":"eb1d92aded35f4e70ee705b9c2fa75beb06f733923ba781b39acba9b70fd643f"}
Dec 03 14:03:05.839340 master-0 kubenswrapper[16176]: I1203 14:03:05.839225 16176 scope.go:117] "RemoveContainer" containerID="eb1d92aded35f4e70ee705b9c2fa75beb06f733923ba781b39acba9b70fd643f"
Dec 03 14:03:05.844828 master-0 kubenswrapper[16176]: I1203 14:03:05.844779 16176 generic.go:334] "Generic (PLEG): container finished" podID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" containerID="25e7a24f654330de81677025ca04a819442a5e884c2ac0658b76adfc9af0ebbb" exitCode=0
Dec 03 14:03:05.844921 master-0 kubenswrapper[16176]: I1203 14:03:05.844870 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerDied","Data":"25e7a24f654330de81677025ca04a819442a5e884c2ac0658b76adfc9af0ebbb"}
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: I1203 14:03:05.845352 16176 scope.go:117] "RemoveContainer" containerID="25e7a24f654330de81677025ca04a819442a5e884c2ac0658b76adfc9af0ebbb"
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: E1203 14:03:05.846306 16176 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/events/console-c5d7cd7f9-2hp75.187db96841c03c89\": dial tcp 192.168.32.10:6443: connect: connection refused" event=<
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: &Event{ObjectMeta:{console-c5d7cd7f9-2hp75.187db96841c03c89 openshift-console 13724 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-console,Name:console-c5d7cd7f9-2hp75,UID:4dd1d142-6569-438d-b0c2-582aed44812d,APIVersion:v1,ResourceVersion:12730,FieldPath:spec.containers{console},},Reason:ProbeError,Message:Startup probe error: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: body:
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:01:26 +0000 UTC,LastTimestamp:2025-12-03 14:02:56.429290301 +0000 UTC m=+266.854930983,Count:10,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Dec 03 14:03:05.848082 master-0 kubenswrapper[16176]: >
Dec 03 14:03:05.849397 master-0 kubenswrapper[16176]: I1203 14:03:05.849363 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/2.log"
Dec 03 14:03:05.849838 master-0 kubenswrapper[16176]: I1203 14:03:05.849798 16176 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02" exitCode=0
Dec 03 14:03:05.849959 master-0 kubenswrapper[16176]: I1203 14:03:05.849870 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02"}
Dec 03 14:03:05.852669 master-0 kubenswrapper[16176]: I1203 14:03:05.852600 16176 generic.go:334] "Generic (PLEG): container finished" podID="e97e1725-cb55-4ce3-952d-a4fd0731577d" containerID="ddbb768c864774f78204191462e3eed3712c04f6cc6d64ff756ae1b9f2a1eff5" exitCode=0
Dec 03 14:03:05.852669 master-0 kubenswrapper[16176]: I1203 14:03:05.852681 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerDied","Data":"ddbb768c864774f78204191462e3eed3712c04f6cc6d64ff756ae1b9f2a1eff5"}
Dec 03 14:03:05.853369 master-0 kubenswrapper[16176]: I1203 14:03:05.853334 16176 scope.go:117] "RemoveContainer" containerID="ddbb768c864774f78204191462e3eed3712c04f6cc6d64ff756ae1b9f2a1eff5"
Dec 03 14:03:05.856646 master-0 kubenswrapper[16176]: I1203 14:03:05.856613 16176 scope.go:117] "RemoveContainer" containerID="7a017ccfa4284a2f004536d19603cd66f22d12e3596ef52bb8973b7b88799d02"
Dec 03 14:03:05.858869 master-0 kubenswrapper[16176]: I1203 14:03:05.858843 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072" exitCode=0
Dec 03 14:03:05.858968 master-0 kubenswrapper[16176]: I1203 14:03:05.858954 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23" exitCode=0
Dec 03 14:03:05.859037 master-0 kubenswrapper[16176]: I1203 14:03:05.859024 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6" exitCode=0
Dec 03 14:03:05.859112 master-0 kubenswrapper[16176]: I1203 14:03:05.859099 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0" exitCode=0
Dec 03 14:03:05.859176 master-0 kubenswrapper[16176]: I1203 14:03:05.859163 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab" exitCode=0
Dec 03 14:03:05.859271 master-0 kubenswrapper[16176]: I1203 14:03:05.858945 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072"}
Dec 03 14:03:05.859329 master-0 kubenswrapper[16176]: I1203 14:03:05.859288 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23"}
Dec 03 14:03:05.859329 master-0 kubenswrapper[16176]: I1203 14:03:05.859307 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6"}
Dec 03 14:03:05.859329 master-0 kubenswrapper[16176]: I1203 14:03:05.859320 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0"}
Dec 03 14:03:05.859421 master-0 kubenswrapper[16176]: I1203 14:03:05.859336 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab"}
Dec 03 14:03:05.859922 master-0 kubenswrapper[16176]: I1203 14:03:05.859887 16176 scope.go:117] "RemoveContainer" containerID="e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a"
Dec 03 14:03:05.859922 master-0 kubenswrapper[16176]: I1203 14:03:05.859919 16176 scope.go:117] "RemoveContainer" containerID="971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab"
Dec 03 14:03:05.860004 master-0 kubenswrapper[16176]: I1203 14:03:05.859932 16176 scope.go:117] "RemoveContainer" containerID="074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0"
Dec 03 14:03:05.860004 master-0 kubenswrapper[16176]: I1203 14:03:05.859946 16176 scope.go:117] "RemoveContainer" containerID="8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6"
Dec 03 14:03:05.860004 master-0 kubenswrapper[16176]: I1203 14:03:05.859957 16176 scope.go:117] "RemoveContainer" containerID="84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23"
Dec 03 14:03:05.860004 master-0 kubenswrapper[16176]: I1203 14:03:05.859966 16176 scope.go:117] "RemoveContainer" containerID="57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072"
Dec 03 14:03:05.866689 master-0 kubenswrapper[16176]: I1203 14:03:05.866648 16176 generic.go:334] "Generic (PLEG): container finished" podID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" containerID="433b3fa5673e195032b56a487e1f362fc9d8cf480bbfba0ea3d9503f78f0235a" exitCode=0
Dec 03 14:03:05.866998 master-0 kubenswrapper[16176]: I1203 14:03:05.866972 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerDied","Data":"433b3fa5673e195032b56a487e1f362fc9d8cf480bbfba0ea3d9503f78f0235a"}
Dec 03 14:03:05.871307 master-0 kubenswrapper[16176]: I1203 14:03:05.871223 16176 generic.go:334] "Generic (PLEG): container finished" podID="22673f47-9484-4eed-bbce-888588c754ed" containerID="4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b" exitCode=0
Dec 03 14:03:05.871402 master-0 kubenswrapper[16176]: I1203 14:03:05.871324 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerDied","Data":"4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b"}
Dec 03 14:03:05.872666 master-0 kubenswrapper[16176]: I1203 14:03:05.872638 16176 scope.go:117] "RemoveContainer" containerID="433b3fa5673e195032b56a487e1f362fc9d8cf480bbfba0ea3d9503f78f0235a"
Dec 03 14:03:05.873768 master-0 kubenswrapper[16176]: I1203 14:03:05.873668 16176 scope.go:117] "RemoveContainer" containerID="4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b"
Dec 03 14:03:05.877607 master-0 kubenswrapper[16176]: I1203 14:03:05.877531 16176 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22" exitCode=0
Dec 03 14:03:05.877607 master-0 kubenswrapper[16176]: I1203 14:03:05.877603 16176 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c" exitCode=0
Dec 03 14:03:05.877725 master-0 kubenswrapper[16176]: I1203 14:03:05.877654 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22"}
Dec 03 14:03:05.877725 master-0 kubenswrapper[16176]: I1203 14:03:05.877674 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c"}
Dec 03 14:03:05.877970 master-0 kubenswrapper[16176]: I1203 14:03:05.877946 16176 scope.go:117] "RemoveContainer" containerID="b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22"
Dec 03 14:03:05.877970 master-0 kubenswrapper[16176]: I1203
14:03:05.877968 16176 scope.go:117] "RemoveContainer" containerID="28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c" Dec 03 14:03:05.881836 master-0 kubenswrapper[16176]: I1203 14:03:05.881795 16176 generic.go:334] "Generic (PLEG): container finished" podID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" containerID="286e240000163980fd7b266d4511288b3506ec5bbae38b7d39eb26613d430cda" exitCode=0 Dec 03 14:03:05.881919 master-0 kubenswrapper[16176]: I1203 14:03:05.881861 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerDied","Data":"286e240000163980fd7b266d4511288b3506ec5bbae38b7d39eb26613d430cda"} Dec 03 14:03:05.882199 master-0 kubenswrapper[16176]: I1203 14:03:05.882170 16176 scope.go:117] "RemoveContainer" containerID="286e240000163980fd7b266d4511288b3506ec5bbae38b7d39eb26613d430cda" Dec 03 14:03:05.885542 master-0 kubenswrapper[16176]: I1203 14:03:05.885512 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/1.log" Dec 03 14:03:05.885619 master-0 kubenswrapper[16176]: I1203 14:03:05.885554 16176 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="668d20dac70369b169c73554528623b6d50616a63ade734b8e38b044fe1f5b5c" exitCode=0 Dec 03 14:03:05.885619 master-0 kubenswrapper[16176]: I1203 14:03:05.885572 16176 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="db27dcf8d44a2c7f1842719b86cb23a142abec21f5f241b9c57e46c810dc5d5e" exitCode=0 Dec 03 14:03:05.885725 master-0 kubenswrapper[16176]: I1203 14:03:05.885626 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" 
event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"668d20dac70369b169c73554528623b6d50616a63ade734b8e38b044fe1f5b5c"} Dec 03 14:03:05.885725 master-0 kubenswrapper[16176]: I1203 14:03:05.885652 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"db27dcf8d44a2c7f1842719b86cb23a142abec21f5f241b9c57e46c810dc5d5e"} Dec 03 14:03:05.886044 master-0 kubenswrapper[16176]: I1203 14:03:05.886009 16176 scope.go:117] "RemoveContainer" containerID="668d20dac70369b169c73554528623b6d50616a63ade734b8e38b044fe1f5b5c" Dec 03 14:03:05.886044 master-0 kubenswrapper[16176]: I1203 14:03:05.886037 16176 scope.go:117] "RemoveContainer" containerID="db27dcf8d44a2c7f1842719b86cb23a142abec21f5f241b9c57e46c810dc5d5e" Dec 03 14:03:05.887271 master-0 kubenswrapper[16176]: I1203 14:03:05.887231 16176 generic.go:334] "Generic (PLEG): container finished" podID="6935a3f8-723e-46e6-8498-483f34bf0825" containerID="3a78cb4cf7263b7ff727b4984d0ab0b818adeb40c92408bba87e262eea4f142e" exitCode=0 Dec 03 14:03:05.887338 master-0 kubenswrapper[16176]: I1203 14:03:05.887272 16176 generic.go:334] "Generic (PLEG): container finished" podID="6935a3f8-723e-46e6-8498-483f34bf0825" containerID="d91c321c57f44509a53798341a72ef3d6374d56a8925ee7e904aa8675b73f42d" exitCode=0 Dec 03 14:03:05.887338 master-0 kubenswrapper[16176]: I1203 14:03:05.887314 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerDied","Data":"3a78cb4cf7263b7ff727b4984d0ab0b818adeb40c92408bba87e262eea4f142e"} Dec 03 14:03:05.887338 master-0 kubenswrapper[16176]: I1203 14:03:05.887335 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" 
event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerDied","Data":"d91c321c57f44509a53798341a72ef3d6374d56a8925ee7e904aa8675b73f42d"} Dec 03 14:03:05.887704 master-0 kubenswrapper[16176]: I1203 14:03:05.887671 16176 scope.go:117] "RemoveContainer" containerID="d91c321c57f44509a53798341a72ef3d6374d56a8925ee7e904aa8675b73f42d" Dec 03 14:03:05.887704 master-0 kubenswrapper[16176]: I1203 14:03:05.887696 16176 scope.go:117] "RemoveContainer" containerID="3a78cb4cf7263b7ff727b4984d0ab0b818adeb40c92408bba87e262eea4f142e" Dec 03 14:03:05.904369 master-0 kubenswrapper[16176]: I1203 14:03:05.901656 16176 generic.go:334] "Generic (PLEG): container finished" podID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" containerID="5e1a3335e1e7a01c650176f41fdc79b673ea09bcb04bb8a4c229686c62ac84e7" exitCode=0 Dec 03 14:03:05.904369 master-0 kubenswrapper[16176]: I1203 14:03:05.901750 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerDied","Data":"5e1a3335e1e7a01c650176f41fdc79b673ea09bcb04bb8a4c229686c62ac84e7"} Dec 03 14:03:05.904369 master-0 kubenswrapper[16176]: I1203 14:03:05.902291 16176 scope.go:117] "RemoveContainer" containerID="5e1a3335e1e7a01c650176f41fdc79b673ea09bcb04bb8a4c229686c62ac84e7" Dec 03 14:03:05.914211 master-0 kubenswrapper[16176]: I1203 14:03:05.914172 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-678c7f799b-4b7nv_1ba502ba-1179-478e-b4b9-f3409320b0ad/route-controller-manager/0.log" Dec 03 14:03:05.914298 master-0 kubenswrapper[16176]: I1203 14:03:05.914235 16176 generic.go:334] "Generic (PLEG): container finished" podID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerID="aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b" exitCode=255 Dec 03 14:03:05.914410 master-0 kubenswrapper[16176]: I1203 14:03:05.914356 
16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerDied","Data":"aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b"} Dec 03 14:03:05.915149 master-0 kubenswrapper[16176]: I1203 14:03:05.915105 16176 scope.go:117] "RemoveContainer" containerID="aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b" Dec 03 14:03:05.918808 master-0 kubenswrapper[16176]: I1203 14:03:05.918781 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_69e3deb6aaa7ca82dd236253a197e02b/kube-apiserver-cert-syncer/0.log" Dec 03 14:03:05.922802 master-0 kubenswrapper[16176]: I1203 14:03:05.922768 16176 generic.go:334] "Generic (PLEG): container finished" podID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerID="69c5b46bb9b1e5d26666667f963bb2655152b9986371a4e0b57ae312b0389515" exitCode=0 Dec 03 14:03:05.922872 master-0 kubenswrapper[16176]: I1203 14:03:05.922829 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerDied","Data":"69c5b46bb9b1e5d26666667f963bb2655152b9986371a4e0b57ae312b0389515"} Dec 03 14:03:05.923556 master-0 kubenswrapper[16176]: I1203 14:03:05.923531 16176 scope.go:117] "RemoveContainer" containerID="69c5b46bb9b1e5d26666667f963bb2655152b9986371a4e0b57ae312b0389515" Dec 03 14:03:05.927566 master-0 kubenswrapper[16176]: I1203 14:03:05.927491 16176 generic.go:334] "Generic (PLEG): container finished" podID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" containerID="dead9648b6db50ab9ffadeb0ded4ac60b4b62fa9651afaff45090595f1cc6b7d" exitCode=0 Dec 03 14:03:05.927635 master-0 kubenswrapper[16176]: I1203 14:03:05.927595 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerDied","Data":"dead9648b6db50ab9ffadeb0ded4ac60b4b62fa9651afaff45090595f1cc6b7d"} Dec 03 14:03:05.927944 master-0 kubenswrapper[16176]: I1203 14:03:05.927921 16176 scope.go:117] "RemoveContainer" containerID="dead9648b6db50ab9ffadeb0ded4ac60b4b62fa9651afaff45090595f1cc6b7d" Dec 03 14:03:05.930172 master-0 kubenswrapper[16176]: I1203 14:03:05.930126 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78d987764b-xcs5w_d3200abb-a440-44db-8897-79c809c1d838/controller-manager/0.log" Dec 03 14:03:05.930234 master-0 kubenswrapper[16176]: I1203 14:03:05.930185 16176 generic.go:334] "Generic (PLEG): container finished" podID="d3200abb-a440-44db-8897-79c809c1d838" containerID="c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7" exitCode=0 Dec 03 14:03:05.930331 master-0 kubenswrapper[16176]: I1203 14:03:05.930286 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerDied","Data":"c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7"} Dec 03 14:03:05.931369 master-0 kubenswrapper[16176]: I1203 14:03:05.931326 16176 scope.go:117] "RemoveContainer" containerID="c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7" Dec 03 14:03:05.931861 master-0 kubenswrapper[16176]: E1203 14:03:05.931809 16176 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-78d987764b-xcs5w_openshift-controller-manager(d3200abb-a440-44db-8897-79c809c1d838)\"" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" 
podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:03:05.932105 master-0 kubenswrapper[16176]: I1203 14:03:05.932081 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-rev/0.log" Dec 03 14:03:05.932887 master-0 kubenswrapper[16176]: I1203 14:03:05.932858 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-metrics/0.log" Dec 03 14:03:05.934238 master-0 kubenswrapper[16176]: I1203 14:03:05.934214 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b" exitCode=2 Dec 03 14:03:05.934238 master-0 kubenswrapper[16176]: I1203 14:03:05.934239 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1" exitCode=0 Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934248 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20" exitCode=2 Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934293 16176 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768" exitCode=0 Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934340 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b"} Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934360 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1"} Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934371 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20"} Dec 03 14:03:05.934382 master-0 kubenswrapper[16176]: I1203 14:03:05.934380 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768"} Dec 03 14:03:05.935316 master-0 kubenswrapper[16176]: I1203 14:03:05.935275 16176 scope.go:117] "RemoveContainer" containerID="d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768" Dec 03 14:03:05.935388 master-0 kubenswrapper[16176]: I1203 14:03:05.935352 16176 scope.go:117] "RemoveContainer" containerID="04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20" Dec 03 14:03:05.935388 master-0 kubenswrapper[16176]: I1203 14:03:05.935364 16176 scope.go:117] "RemoveContainer" containerID="7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1" Dec 03 14:03:05.935388 master-0 kubenswrapper[16176]: I1203 14:03:05.935372 16176 scope.go:117] "RemoveContainer" containerID="8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b" Dec 03 14:03:05.936708 master-0 kubenswrapper[16176]: I1203 14:03:05.936680 16176 generic.go:334] "Generic (PLEG): container finished" podID="4df2889c-99f7-402a-9d50-18ccf427179c" containerID="dcbe9987f77ff713c092b1bf8411528eede8d9b0b5e7047282320f1ad985745a" exitCode=0 Dec 03 14:03:05.936785 master-0 kubenswrapper[16176]: I1203 14:03:05.936708 16176 generic.go:334] "Generic (PLEG): container finished" 
podID="4df2889c-99f7-402a-9d50-18ccf427179c" containerID="9d9885192efaa088fe29d3228dd2d4225298754ffb0326c83d203b3ded8fe9b1" exitCode=0 Dec 03 14:03:05.936830 master-0 kubenswrapper[16176]: I1203 14:03:05.936778 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerDied","Data":"dcbe9987f77ff713c092b1bf8411528eede8d9b0b5e7047282320f1ad985745a"} Dec 03 14:03:05.936830 master-0 kubenswrapper[16176]: I1203 14:03:05.936817 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerDied","Data":"9d9885192efaa088fe29d3228dd2d4225298754ffb0326c83d203b3ded8fe9b1"} Dec 03 14:03:05.937635 master-0 kubenswrapper[16176]: I1203 14:03:05.937474 16176 scope.go:117] "RemoveContainer" containerID="9d9885192efaa088fe29d3228dd2d4225298754ffb0326c83d203b3ded8fe9b1" Dec 03 14:03:05.937635 master-0 kubenswrapper[16176]: I1203 14:03:05.937500 16176 scope.go:117] "RemoveContainer" containerID="dcbe9987f77ff713c092b1bf8411528eede8d9b0b5e7047282320f1ad985745a" Dec 03 14:03:05.940912 master-0 kubenswrapper[16176]: I1203 14:03:05.940835 16176 status_manager.go:907] "Failed to delete status for pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5c8b4c9687-4pxw5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.941853 master-0 kubenswrapper[16176]: I1203 14:03:05.941754 16176 status_manager.go:851] "Failed to get status for pod" podUID="69e3deb6aaa7ca82dd236253a197e02b" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.944107 master-0 kubenswrapper[16176]: I1203 14:03:05.944061 16176 generic.go:334] "Generic (PLEG): container finished" podID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" containerID="022c984796ffbc81ed2de2d84261fe1bf89204572d6040b93e26ccb33c39afb7" exitCode=0 Dec 03 14:03:05.944374 master-0 kubenswrapper[16176]: I1203 14:03:05.944134 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerDied","Data":"022c984796ffbc81ed2de2d84261fe1bf89204572d6040b93e26ccb33c39afb7"} Dec 03 14:03:05.944445 master-0 kubenswrapper[16176]: I1203 14:03:05.944197 16176 status_manager.go:851] "Failed to get status for pod" podUID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" pod="openshift-dns/node-resolver-4xlhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-4xlhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.945038 master-0 kubenswrapper[16176]: I1203 14:03:05.944973 16176 status_manager.go:851] "Failed to get status for pod" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-cb84b9cdf-qn94w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.945109 master-0 kubenswrapper[16176]: I1203 14:03:05.945056 16176 scope.go:117] "RemoveContainer" containerID="022c984796ffbc81ed2de2d84261fe1bf89204572d6040b93e26ccb33c39afb7" Dec 03 14:03:05.945651 master-0 kubenswrapper[16176]: I1203 14:03:05.945601 16176 status_manager.go:851] "Failed to get status for pod" 
podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-5c8b4c9687-4pxw5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.946200 master-0 kubenswrapper[16176]: I1203 14:03:05.946155 16176 status_manager.go:851] "Failed to get status for pod" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-6b7bcd6566-jh9m8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.946777 master-0 kubenswrapper[16176]: I1203 14:03:05.946737 16176 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.947337 master-0 kubenswrapper[16176]: I1203 14:03:05.947297 16176 status_manager.go:851] "Failed to get status for pod" podUID="c98a8d85d3901d33f6fe192bdc7172aa" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.948036 master-0 kubenswrapper[16176]: I1203 14:03:05.947981 16176 status_manager.go:851] "Failed to get status for pod" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-86897dd478-qqwh7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.948742 master-0 kubenswrapper[16176]: I1203 14:03:05.948651 16176 status_manager.go:851] "Failed to get status for pod" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" pod="openshift-ingress-canary/ingress-canary-vkpv4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-vkpv4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.949306 master-0 kubenswrapper[16176]: I1203 14:03:05.949250 16176 status_manager.go:851] "Failed to get status for pod" podUID="b495b0c38f2c54e7cc46282c5f92aab5" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.949980 master-0 kubenswrapper[16176]: I1203 14:03:05.949921 16176 status_manager.go:851] "Failed to get status for pod" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-6964bb78b7-g4lv2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.951191 master-0 kubenswrapper[16176]: I1203 14:03:05.951146 16176 status_manager.go:851] "Failed to get status for pod" podUID="d7d6a05e-beee-40e9-b376-5c22e285b27a" pod="openshift-image-registry/node-ca-4p4zh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-4p4zh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.952289 master-0 kubenswrapper[16176]: I1203 14:03:05.952200 
16176 status_manager.go:851] "Failed to get status for pod" podUID="d3200abb-a440-44db-8897-79c809c1d838" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78d987764b-xcs5w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.952853 master-0 kubenswrapper[16176]: I1203 14:03:05.952822 16176 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-77df56447c-vsrxx_a8dc6511-7339-4269-9d43-14ce53bb4e7f/console-operator/0.log" Dec 03 14:03:05.952906 master-0 kubenswrapper[16176]: I1203 14:03:05.952864 16176 generic.go:334] "Generic (PLEG): container finished" podID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" containerID="d4e51eb8e51007bc54001295feff7c232e1b03f10ddff2ae3a464ceef9c2aa28" exitCode=1 Dec 03 14:03:05.952906 master-0 kubenswrapper[16176]: I1203 14:03:05.952894 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerDied","Data":"d4e51eb8e51007bc54001295feff7c232e1b03f10ddff2ae3a464ceef9c2aa28"} Dec 03 14:03:05.953027 master-0 kubenswrapper[16176]: I1203 14:03:05.952973 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-76bd5d69c7-fjrrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.953349 master-0 kubenswrapper[16176]: I1203 14:03:05.953326 16176 scope.go:117] "RemoveContainer" containerID="d4e51eb8e51007bc54001295feff7c232e1b03f10ddff2ae3a464ceef9c2aa28" Dec 03 14:03:05.953687 master-0 kubenswrapper[16176]: I1203 14:03:05.953647 16176 
status_manager.go:851] "Failed to get status for pod" podUID="7bce50c457ac1f4721bc81a570dd238a" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.954223 master-0 kubenswrapper[16176]: I1203 14:03:05.954177 16176 status_manager.go:851] "Failed to get status for pod" podUID="ebf07eb54db570834b7c9a90b6b07403" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.954798 master-0 kubenswrapper[16176]: I1203 14:03:05.954741 16176 status_manager.go:851] "Failed to get status for pod" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" pod="openshift-marketplace/redhat-marketplace-ddwmn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ddwmn\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.955582 master-0 kubenswrapper[16176]: I1203 14:03:05.955538 16176 status_manager.go:851] "Failed to get status for pod" podUID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" pod="openshift-monitoring/node-exporter-b62gf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/node-exporter-b62gf\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:05.956152 master-0 kubenswrapper[16176]: I1203 14:03:05.956111 16176 status_manager.go:851] "Failed to get status for pod" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-56f5898f45-fhnc5\": dial tcp 192.168.32.10:6443: connect: connection refused" 
Dec 03 14:03:05.956794 master-0 kubenswrapper[16176]: I1203 14:03:05.956753 16176 status_manager.go:851] "Failed to get status for pod" podUID="19c2a40b-213c-42f1-9459-87c2e780a75f" pod="openshift-multus/multus-additional-cni-plugins-42hmk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-42hmk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.957350 master-0 kubenswrapper[16176]: I1203 14:03:05.957300 16176 status_manager.go:851] "Failed to get status for pod" podUID="911f6333-cdb0-425c-b79b-f892444b7097" pod="openshift-marketplace/redhat-operators-6z4sc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6z4sc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.957940 master-0 kubenswrapper[16176]: I1203 14:03:05.957878 16176 status_manager.go:851] "Failed to get status for pod" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.958529 master-0 kubenswrapper[16176]: I1203 14:03:05.958463 16176 status_manager.go:851] "Failed to get status for pod" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-7b795784b8-44frm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.959053 master-0 kubenswrapper[16176]: I1203 14:03:05.959019 16176 status_manager.go:851] "Failed to get status for pod" podUID="1c562495-1290-4792-b4b2-639faa594ae2" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-667484ff5-n7qz8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.959379 master-0 kubenswrapper[16176]: I1203 14:03:05.959349 16176 status_manager.go:851] "Failed to get status for pod" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-57cbc648f8-q4cgg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.960212 master-0 kubenswrapper[16176]: I1203 14:03:05.960143 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b681889-eb2c-41fb-a1dc-69b99227b45b" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.960723 master-0 kubenswrapper[16176]: I1203 14:03:05.960676 16176 status_manager.go:851] "Failed to get status for pod" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-cc996c4bd-j4hzr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.961252 master-0 kubenswrapper[16176]: I1203 14:03:05.961221 16176 status_manager.go:851] "Failed to get status for pod" podUID="52100521-67e9-40c9-887c-eda6560f06e0" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-7978bf889c-n64v4\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.961823 master-0 kubenswrapper[16176]: I1203 14:03:05.961784 16176 status_manager.go:851] "Failed to get status for pod" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-5f78c89466-bshxw\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.962301 master-0 kubenswrapper[16176]: I1203 14:03:05.962273 16176 status_manager.go:851] "Failed to get status for pod" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-6985f84b49-v9vlg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.962733 master-0 kubenswrapper[16176]: I1203 14:03:05.962707 16176 status_manager.go:851] "Failed to get status for pod" podUID="22673f47-9484-4eed-bbce-888588c754ed" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5bdcc987c4-x99xc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.963171 master-0 kubenswrapper[16176]: I1203 14:03:05.963118 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" pod="openshift-multus/network-metrics-daemon-ch7xd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-ch7xd\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.963939 master-0 kubenswrapper[16176]: I1203 14:03:05.963897 16176 status_manager.go:851] "Failed to get status for pod" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-547cc9cc49-kqs4k\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.964584 master-0 kubenswrapper[16176]: I1203 14:03:05.964545 16176 status_manager.go:851] "Failed to get status for pod" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.964923 master-0 kubenswrapper[16176]: I1203 14:03:05.964896 16176 status_manager.go:851] "Failed to get status for pod" podUID="0b1e0884-ff54-419b-90d3-25f561a6391d" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.965229 master-0 kubenswrapper[16176]: I1203 14:03:05.965203 16176 status_manager.go:851] "Failed to get status for pod" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.965567 master-0 kubenswrapper[16176]: I1203 14:03:05.965541 16176 status_manager.go:851] "Failed to get status for pod" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" pod="openshift-console/downloads-6f5db8559b-96ljh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-6f5db8559b-96ljh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.965983 master-0 kubenswrapper[16176]: I1203 14:03:05.965946 16176 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.966333 master-0 kubenswrapper[16176]: I1203 14:03:05.966302 16176 status_manager.go:851] "Failed to get status for pod" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7dcc7f9bd6-68wml\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.966839 master-0 kubenswrapper[16176]: I1203 14:03:05.966799 16176 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.967242 master-0 kubenswrapper[16176]: I1203 14:03:05.967207 16176 status_manager.go:851] "Failed to get status for pod" podUID="77430348-b53a-4898-8047-be8bb542a0a7" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-txl6b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.967717 master-0 kubenswrapper[16176]: I1203 14:03:05.967675 16176 status_manager.go:851] "Failed to get status for pod" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-664c9d94c9-9vfr4\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.968175 master-0 kubenswrapper[16176]: I1203 14:03:05.968134 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.968617 master-0 kubenswrapper[16176]: I1203 14:03:05.968579 16176 status_manager.go:851] "Failed to get status for pod" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-565bdcb8-477pk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.969079 master-0 kubenswrapper[16176]: I1203 14:03:05.969037 16176 status_manager.go:851] "Failed to get status for pod" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-75b4d49d4c-h599p\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.969508 master-0 kubenswrapper[16176]: I1203 14:03:05.969472 16176 status_manager.go:851] "Failed to get status for pod" podUID="da583723-b3ad-4a6f-b586-09b739bd7f8c" pod="openshift-network-node-identity/network-node-identity-c8csx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-c8csx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.970084 master-0 kubenswrapper[16176]: I1203 14:03:05.970048 16176 status_manager.go:851] "Failed to get status for pod" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-754cfd84-qf898\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.970504 master-0 kubenswrapper[16176]: I1203 14:03:05.970474 16176 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.970942 master-0 kubenswrapper[16176]: I1203 14:03:05.970919 16176 status_manager.go:851] "Failed to get status for pod" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7c64dd9d8b-49skr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.971504 master-0 kubenswrapper[16176]: I1203 14:03:05.971470 16176 status_manager.go:851] "Failed to get status for pod" podUID="0535e784-8e28-4090-aa2e-df937910767c" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-7479ffdf48-hpdzl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.971924 master-0 kubenswrapper[16176]: I1203 14:03:05.971890 16176 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.972280 master-0 kubenswrapper[16176]: I1203 14:03:05.972234 16176 status_manager.go:851] "Failed to get status for pod" podUID="6935a3f8-723e-46e6-8498-483f34bf0825" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-f9f7f4946-48mrg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.972717 master-0 kubenswrapper[16176]: I1203 14:03:05.972682 16176 status_manager.go:851] "Failed to get status for pod" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5fdc576499-j2n8j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.973074 master-0 kubenswrapper[16176]: I1203 14:03:05.973046 16176 status_manager.go:851] "Failed to get status for pod" podUID="eecc43f5-708f-4395-98cc-696b243d6321" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-pvrfs\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.973410 master-0 kubenswrapper[16176]: I1203 14:03:05.973383 16176 status_manager.go:851] "Failed to get status for pod" podUID="adbcce01-7282-4a75-843a-9623060346f0" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7c4697b5f5-9f69p\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.973771 master-0 kubenswrapper[16176]: I1203 14:03:05.973742 16176 status_manager.go:851] "Failed to get status for pod" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-85dbd94574-8jfp5\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.974122 master-0 kubenswrapper[16176]: I1203 14:03:05.974095 16176 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.974623 master-0 kubenswrapper[16176]: I1203 14:03:05.974593 16176 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.974959 master-0 kubenswrapper[16176]: I1203 14:03:05.974931 16176 status_manager.go:851] "Failed to get status for pod" podUID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" pod="openshift-network-operator/iptables-alerter-n24qb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-n24qb\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.975479 master-0 kubenswrapper[16176]: I1203 14:03:05.975307 16176 status_manager.go:851] "Failed to get status for pod" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7cf5cf757f-zgm6l\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.975710 master-0 kubenswrapper[16176]: I1203 14:03:05.975677 16176 status_manager.go:851] "Failed to get status for pod" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-69cc794c58-mfjk2\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.976122 master-0 kubenswrapper[16176]: I1203 14:03:05.976085 16176 status_manager.go:851] "Failed to get status for pod" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6cbf58c977-8lh6n\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.976540 master-0 kubenswrapper[16176]: I1203 14:03:05.976505 16176 status_manager.go:851] "Failed to get status for pod" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" pod="openshift-network-diagnostics/network-check-target-pcchm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-pcchm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.977179 master-0 kubenswrapper[16176]: I1203 14:03:05.977142 16176 status_manager.go:851] "Failed to get status for pod" podUID="fd2fa610bb2a39c39fcdd00db03a511a" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.978412 master-0 kubenswrapper[16176]: I1203 14:03:05.978375 16176 status_manager.go:851] "Failed to get status for pod" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-678c7f799b-4b7nv\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.979407 master-0 kubenswrapper[16176]: I1203 14:03:05.978841 16176 status_manager.go:851] "Failed to get status for pod" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" pod="openshift-console/console-c5d7cd7f9-2hp75" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-c5d7cd7f9-2hp75\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.979407 master-0 kubenswrapper[16176]: I1203 14:03:05.979224 16176 status_manager.go:851] "Failed to get status for pod" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" pod="openshift-console/console-648d88c756-vswh8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-648d88c756-vswh8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.980025 master-0 kubenswrapper[16176]: I1203 14:03:05.979985 16176 status_manager.go:851] "Failed to get status for pod" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" pod="openshift-dns/dns-default-5m4f8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-5m4f8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.980605 master-0 kubenswrapper[16176]: I1203 14:03:05.980580 16176 status_manager.go:851] "Failed to get status for pod" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-74cddd4fb5-phk6r\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.981185 master-0 kubenswrapper[16176]: I1203 14:03:05.981126 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-747bdb58b5-mn76f\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.981912 master-0 kubenswrapper[16176]: I1203 14:03:05.981865 16176 status_manager.go:851] "Failed to get status for pod" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-56f5898f45-fhnc5\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.982290 master-0 kubenswrapper[16176]: I1203 14:03:05.982246 16176 status_manager.go:851] "Failed to get status for pod" podUID="19c2a40b-213c-42f1-9459-87c2e780a75f" pod="openshift-multus/multus-additional-cni-plugins-42hmk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-42hmk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.982605 master-0 kubenswrapper[16176]: I1203 14:03:05.982578 16176 status_manager.go:851] "Failed to get status for pod" podUID="911f6333-cdb0-425c-b79b-f892444b7097" pod="openshift-marketplace/redhat-operators-6z4sc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6z4sc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.982938 master-0 kubenswrapper[16176]: I1203 14:03:05.982905 16176 status_manager.go:851] "Failed to get status for pod" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.983285 master-0 kubenswrapper[16176]: I1203 14:03:05.983242 16176 status_manager.go:851] "Failed to get status for pod" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-7b795784b8-44frm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.984671 master-0 kubenswrapper[16176]: I1203 14:03:05.984635 16176 status_manager.go:851] "Failed to get status for pod" podUID="1c562495-1290-4792-b4b2-639faa594ae2" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-667484ff5-n7qz8\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.985054 master-0 kubenswrapper[16176]: I1203 14:03:05.985016 16176 status_manager.go:851] "Failed to get status for pod" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-57cbc648f8-q4cgg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.985422 master-0 kubenswrapper[16176]: I1203 14:03:05.985392 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b681889-eb2c-41fb-a1dc-69b99227b45b" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.985764 master-0 kubenswrapper[16176]: I1203 14:03:05.985734 16176 status_manager.go:851] "Failed to get status for pod" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-cc996c4bd-j4hzr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.986102 master-0 kubenswrapper[16176]: I1203 14:03:05.986070 16176 status_manager.go:851] "Failed to get status for pod" podUID="52100521-67e9-40c9-887c-eda6560f06e0" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-7978bf889c-n64v4\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.986453 master-0 kubenswrapper[16176]: I1203 14:03:05.986422 16176 status_manager.go:851] "Failed to get status for pod" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-5f78c89466-bshxw\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.986798 master-0 kubenswrapper[16176]: I1203 14:03:05.986769 16176 status_manager.go:851] "Failed to get status for pod" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-6985f84b49-v9vlg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.987139 master-0 kubenswrapper[16176]: I1203 14:03:05.987111 16176 status_manager.go:851] "Failed to get status for pod" podUID="22673f47-9484-4eed-bbce-888588c754ed" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5bdcc987c4-x99xc\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.987857 master-0 kubenswrapper[16176]: I1203 14:03:05.987828 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" pod="openshift-multus/network-metrics-daemon-ch7xd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-ch7xd\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.988221 master-0 kubenswrapper[16176]: I1203 14:03:05.988192 16176 status_manager.go:851] "Failed to get status for pod" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-547cc9cc49-kqs4k\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.988598 master-0 kubenswrapper[16176]: I1203 14:03:05.988569 16176 status_manager.go:851] "Failed to get status for pod" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.988926 master-0 kubenswrapper[16176]: I1203 14:03:05.988897 16176 status_manager.go:851] "Failed to get status for pod" podUID="0b1e0884-ff54-419b-90d3-25f561a6391d" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.989445 master-0 kubenswrapper[16176]: I1203 14:03:05.989416 16176 status_manager.go:851] "Failed to get status for pod" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.989975 master-0 kubenswrapper[16176]: I1203 14:03:05.989917 16176 status_manager.go:851] "Failed to get status for pod" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" pod="openshift-console/downloads-6f5db8559b-96ljh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-6f5db8559b-96ljh\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.990548 master-0 kubenswrapper[16176]: I1203 14:03:05.990510 16176 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.990933 master-0 kubenswrapper[16176]: I1203 14:03:05.990895 16176 status_manager.go:851] "Failed to get status for pod" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7dcc7f9bd6-68wml\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.991489 master-0 kubenswrapper[16176]: I1203 14:03:05.991245 16176 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.991928 master-0 kubenswrapper[16176]: I1203 14:03:05.991887 16176 status_manager.go:851] "Failed to get status for pod" podUID="77430348-b53a-4898-8047-be8bb542a0a7" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-txl6b\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.992344 master-0 kubenswrapper[16176]: I1203 14:03:05.992314 16176 status_manager.go:851] "Failed to get status for pod" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-664c9d94c9-9vfr4\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.993144 master-0 kubenswrapper[16176]: I1203 14:03:05.993063 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.993845 master-0 kubenswrapper[16176]: I1203 14:03:05.993800 16176 status_manager.go:851] "Failed to get status for pod" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-565bdcb8-477pk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.994477 master-0 kubenswrapper[16176]: I1203 14:03:05.994430 16176 status_manager.go:851] "Failed to get status for pod" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-75b4d49d4c-h599p\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.995044 master-0 kubenswrapper[16176]: I1203 14:03:05.995010 16176 status_manager.go:851] "Failed to get status for pod" podUID="da583723-b3ad-4a6f-b586-09b739bd7f8c" pod="openshift-network-node-identity/network-node-identity-c8csx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-c8csx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.995738 master-0 kubenswrapper[16176]: I1203 14:03:05.995714 16176 status_manager.go:851] "Failed to get status for pod" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-754cfd84-qf898\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.996440 master-0 kubenswrapper[16176]: I1203 14:03:05.996376 16176 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:05.997176 master-0 kubenswrapper[16176]: I1203 14:03:05.997124 16176 status_manager.go:851] "Failed to get status for pod" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7c64dd9d8b-49skr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.000210 master-0 kubenswrapper[16176]: I1203 14:03:06.000169 16176 status_manager.go:851] "Failed to get status for pod" podUID="0535e784-8e28-4090-aa2e-df937910767c" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-7479ffdf48-hpdzl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.020055 master-0 kubenswrapper[16176]: I1203 14:03:06.019997 16176 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.039446 master-0 kubenswrapper[16176]: I1203 14:03:06.039316 16176 status_manager.go:851] "Failed to get status for pod" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-77df56447c-vsrxx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.060703 master-0 kubenswrapper[16176]: I1203 14:03:06.060644 16176 status_manager.go:851] "Failed to get status for pod" podUID="6935a3f8-723e-46e6-8498-483f34bf0825" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-f9f7f4946-48mrg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.079531 master-0 kubenswrapper[16176]: I1203 14:03:06.079402 16176 status_manager.go:851] "Failed to get status for pod" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5fdc576499-j2n8j\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.100225 master-0 kubenswrapper[16176]: I1203 14:03:06.100152 16176 status_manager.go:851] "Failed to get status for pod" podUID="eecc43f5-708f-4395-98cc-696b243d6321" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-pvrfs\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.119563 master-0 kubenswrapper[16176]: I1203 14:03:06.119503 16176 status_manager.go:851] "Failed to get status for pod" podUID="adbcce01-7282-4a75-843a-9623060346f0" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7c4697b5f5-9f69p\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.139715 master-0 kubenswrapper[16176]: I1203 14:03:06.139643 16176 status_manager.go:851] "Failed to get status for pod" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-85dbd94574-8jfp5\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.159964 master-0 kubenswrapper[16176]: I1203 14:03:06.159897 16176 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.182396 master-0 kubenswrapper[16176]: I1203 14:03:06.180165 16176 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.200208 master-0 kubenswrapper[16176]: I1203 14:03:06.200134 16176 status_manager.go:851] "Failed to get status for pod" podUID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" pod="openshift-network-operator/iptables-alerter-n24qb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-n24qb\": dial tcp 192.168.32.10:6443: connect: connection refused"
Dec 03 14:03:06.220364 master-0 kubenswrapper[16176]: I1203 14:03:06.219978 16176 status_manager.go:851] "Failed to get status for pod" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7cf5cf757f-zgm6l\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.240043 master-0 kubenswrapper[16176]: I1203 14:03:06.239951 16176 status_manager.go:851] "Failed to get status for pod" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-69cc794c58-mfjk2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.272522 master-0 kubenswrapper[16176]: I1203 14:03:06.270974 16176 status_manager.go:851] "Failed to get status for pod" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6cbf58c977-8lh6n\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.278030 master-0 kubenswrapper[16176]: E1203 14:03:06.277959 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:03:06.278030 master-0 kubenswrapper[16176]: E1203 14:03:06.277942 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:03:06.278616 master-0 kubenswrapper[16176]: E1203 14:03:06.278565 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:03:06.278691 master-0 kubenswrapper[16176]: E1203 14:03:06.278654 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:03:06.279001 master-0 kubenswrapper[16176]: E1203 14:03:06.278960 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:03:06.279001 master-0 kubenswrapper[16176]: E1203 14:03:06.278989 16176 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus" Dec 03 14:03:06.279143 master-0 kubenswrapper[16176]: E1203 14:03:06.278998 16176 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:03:06.279143 master-0 kubenswrapper[16176]: E1203 14:03:06.279031 16176 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe is running failed: container process not found" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus" Dec 03 14:03:06.279376 
master-0 kubenswrapper[16176]: I1203 14:03:06.279319 16176 status_manager.go:851] "Failed to get status for pod" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" pod="openshift-network-diagnostics/network-check-target-pcchm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-pcchm\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.299362 master-0 kubenswrapper[16176]: I1203 14:03:06.299294 16176 status_manager.go:851] "Failed to get status for pod" podUID="fd2fa610bb2a39c39fcdd00db03a511a" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.319956 master-0 kubenswrapper[16176]: I1203 14:03:06.319888 16176 status_manager.go:851] "Failed to get status for pod" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-678c7f799b-4b7nv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.339660 master-0 kubenswrapper[16176]: I1203 14:03:06.339550 16176 status_manager.go:851] "Failed to get status for pod" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" pod="openshift-console/console-c5d7cd7f9-2hp75" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-c5d7cd7f9-2hp75\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.360675 master-0 kubenswrapper[16176]: I1203 14:03:06.360610 16176 status_manager.go:851] "Failed to get status for pod" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" pod="openshift-console/console-648d88c756-vswh8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-648d88c756-vswh8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.379901 master-0 kubenswrapper[16176]: I1203 14:03:06.379833 16176 status_manager.go:851] "Failed to get status for pod" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" pod="openshift-dns/dns-default-5m4f8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-5m4f8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.405456 master-0 kubenswrapper[16176]: I1203 14:03:06.405378 16176 status_manager.go:851] "Failed to get status for pod" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-74cddd4fb5-phk6r\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.420208 master-0 kubenswrapper[16176]: I1203 14:03:06.420128 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-747bdb58b5-mn76f\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.427922 master-0 kubenswrapper[16176]: I1203 14:03:06.427871 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:03:06.428084 master-0 kubenswrapper[16176]: I1203 14:03:06.427953 16176 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:03:06.439701 master-0 kubenswrapper[16176]: I1203 14:03:06.439628 16176 status_manager.go:851] "Failed to get status 
for pod" podUID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" pod="openshift-dns/node-resolver-4xlhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-4xlhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.459590 master-0 kubenswrapper[16176]: I1203 14:03:06.459523 16176 status_manager.go:851] "Failed to get status for pod" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-cb84b9cdf-qn94w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.480409 master-0 kubenswrapper[16176]: I1203 14:03:06.480030 16176 status_manager.go:851] "Failed to get status for pod" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-6b7bcd6566-jh9m8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.501143 master-0 kubenswrapper[16176]: I1203 14:03:06.501082 16176 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.519377 master-0 kubenswrapper[16176]: I1203 14:03:06.519255 16176 status_manager.go:851] "Failed to get status for pod" podUID="c98a8d85d3901d33f6fe192bdc7172aa" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.539286 master-0 kubenswrapper[16176]: I1203 14:03:06.539161 16176 status_manager.go:851] "Failed to get status for pod" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-86897dd478-qqwh7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.559646 master-0 kubenswrapper[16176]: I1203 14:03:06.559509 16176 status_manager.go:851] "Failed to get status for pod" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" pod="openshift-ingress-canary/ingress-canary-vkpv4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-vkpv4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.561904 master-0 kubenswrapper[16176]: I1203 14:03:06.561839 16176 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Dec 03 14:03:06.562008 master-0 kubenswrapper[16176]: I1203 14:03:06.561935 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused" Dec 03 14:03:06.580032 master-0 kubenswrapper[16176]: I1203 14:03:06.579976 16176 status_manager.go:851] "Failed to get status for pod" podUID="b495b0c38f2c54e7cc46282c5f92aab5" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.600003 master-0 kubenswrapper[16176]: I1203 14:03:06.599886 16176 status_manager.go:851] "Failed to get status for pod" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-6964bb78b7-g4lv2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.620227 master-0 kubenswrapper[16176]: I1203 14:03:06.620123 16176 status_manager.go:851] "Failed to get status for pod" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-65dc4bcb88-96zcz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.640031 master-0 kubenswrapper[16176]: I1203 14:03:06.639930 16176 status_manager.go:851] "Failed to get status for pod" podUID="d7d6a05e-beee-40e9-b376-5c22e285b27a" pod="openshift-image-registry/node-ca-4p4zh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-4p4zh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.653999 master-0 kubenswrapper[16176]: I1203 14:03:06.653929 16176 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:03:06.660606 master-0 kubenswrapper[16176]: I1203 14:03:06.660547 16176 status_manager.go:851] "Failed to get status for pod" 
podUID="d3200abb-a440-44db-8897-79c809c1d838" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78d987764b-xcs5w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.679911 master-0 kubenswrapper[16176]: I1203 14:03:06.679804 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-76bd5d69c7-fjrrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.700639 master-0 kubenswrapper[16176]: I1203 14:03:06.700563 16176 status_manager.go:851] "Failed to get status for pod" podUID="7bce50c457ac1f4721bc81a570dd238a" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.721004 master-0 kubenswrapper[16176]: I1203 14:03:06.720878 16176 status_manager.go:851] "Failed to get status for pod" podUID="ebf07eb54db570834b7c9a90b6b07403" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.739521 master-0 kubenswrapper[16176]: I1203 14:03:06.739437 16176 status_manager.go:851] "Failed to get status for pod" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" pod="openshift-marketplace/redhat-marketplace-ddwmn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ddwmn\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 
14:03:06.760943 master-0 kubenswrapper[16176]: I1203 14:03:06.760865 16176 status_manager.go:851] "Failed to get status for pod" podUID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" pod="openshift-monitoring/node-exporter-b62gf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/node-exporter-b62gf\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.960835 master-0 kubenswrapper[16176]: I1203 14:03:06.960779 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:03:06.961088 master-0 kubenswrapper[16176]: I1203 14:03:06.960850 16176 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:03:06.961088 master-0 kubenswrapper[16176]: I1203 14:03:06.960909 16176 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:03:06.961088 master-0 kubenswrapper[16176]: I1203 14:03:06.961001 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: 
connection refused" Dec 03 14:03:06.967297 master-0 kubenswrapper[16176]: I1203 14:03:06.967190 16176 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a" exitCode=0 Dec 03 14:03:06.967297 master-0 kubenswrapper[16176]: I1203 14:03:06.967274 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a"} Dec 03 14:03:06.970182 master-0 kubenswrapper[16176]: I1203 14:03:06.970137 16176 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="c15591114b17f80d4005fe7b9b93186b1001827c8ad32c7e12c1faa9d0831719" exitCode=0 Dec 03 14:03:06.970182 master-0 kubenswrapper[16176]: I1203 14:03:06.970163 16176 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="0b5ae198d8458a64b4fc5c2a2dd3e600bd9b382a477dec6dc5d365c36f83700c" exitCode=0 Dec 03 14:03:06.970366 master-0 kubenswrapper[16176]: I1203 14:03:06.970228 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"c15591114b17f80d4005fe7b9b93186b1001827c8ad32c7e12c1faa9d0831719"} Dec 03 14:03:06.970366 master-0 kubenswrapper[16176]: I1203 14:03:06.970317 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"0b5ae198d8458a64b4fc5c2a2dd3e600bd9b382a477dec6dc5d365c36f83700c"} Dec 03 14:03:06.971127 master-0 kubenswrapper[16176]: I1203 14:03:06.971093 16176 scope.go:117] "RemoveContainer" 
containerID="c15591114b17f80d4005fe7b9b93186b1001827c8ad32c7e12c1faa9d0831719" Dec 03 14:03:06.971127 master-0 kubenswrapper[16176]: I1203 14:03:06.971122 16176 scope.go:117] "RemoveContainer" containerID="0b5ae198d8458a64b4fc5c2a2dd3e600bd9b382a477dec6dc5d365c36f83700c" Dec 03 14:03:06.971674 master-0 kubenswrapper[16176]: I1203 14:03:06.971587 16176 status_manager.go:851] "Failed to get status for pod" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-75b4d49d4c-h599p\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.972137 master-0 kubenswrapper[16176]: I1203 14:03:06.972113 16176 generic.go:334] "Generic (PLEG): container finished" podID="ec89938d-35a5-46ba-8c63-12489db18cbd" containerID="6595725af54f433a5152cc38b13503348ac89e555e54ae8506677c56b070363b" exitCode=0 Dec 03 14:03:06.972217 master-0 kubenswrapper[16176]: I1203 14:03:06.972181 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerDied","Data":"6595725af54f433a5152cc38b13503348ac89e555e54ae8506677c56b070363b"} Dec 03 14:03:06.972526 master-0 kubenswrapper[16176]: I1203 14:03:06.972495 16176 scope.go:117] "RemoveContainer" containerID="6595725af54f433a5152cc38b13503348ac89e555e54ae8506677c56b070363b" Dec 03 14:03:06.973819 master-0 kubenswrapper[16176]: I1203 14:03:06.973751 16176 status_manager.go:851] "Failed to get status for pod" podUID="da583723-b3ad-4a6f-b586-09b739bd7f8c" pod="openshift-network-node-identity/network-node-identity-c8csx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-c8csx\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Dec 03 14:03:06.974547 master-0 kubenswrapper[16176]: I1203 14:03:06.974487 16176 status_manager.go:851] "Failed to get status for pod" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-754cfd84-qf898\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.975348 master-0 kubenswrapper[16176]: I1203 14:03:06.975246 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.976023 master-0 kubenswrapper[16176]: I1203 14:03:06.975963 16176 status_manager.go:851] "Failed to get status for pod" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-565bdcb8-477pk\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.976688 master-0 kubenswrapper[16176]: I1203 14:03:06.976635 16176 status_manager.go:851] "Failed to get status for pod" podUID="0535e784-8e28-4090-aa2e-df937910767c" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-7479ffdf48-hpdzl\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.977210 master-0 kubenswrapper[16176]: I1203 14:03:06.977163 16176 status_manager.go:851] "Failed to get status for pod" 
podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.978017 master-0 kubenswrapper[16176]: I1203 14:03:06.977972 16176 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.978197 master-0 kubenswrapper[16176]: I1203 14:03:06.978148 16176 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" exitCode=0 Dec 03 14:03:06.978239 master-0 kubenswrapper[16176]: I1203 14:03:06.978178 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe"} Dec 03 14:03:06.978814 master-0 kubenswrapper[16176]: I1203 14:03:06.978774 16176 status_manager.go:851] "Failed to get status for pod" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7c64dd9d8b-49skr\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.979301 master-0 kubenswrapper[16176]: I1203 14:03:06.979251 16176 status_manager.go:851] "Failed to get status for pod" podUID="adbcce01-7282-4a75-843a-9623060346f0" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7c4697b5f5-9f69p\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.979777 master-0 kubenswrapper[16176]: I1203 14:03:06.979737 16176 status_manager.go:851] "Failed to get status for pod" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-85dbd94574-8jfp5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:06.999961 master-0 kubenswrapper[16176]: I1203 14:03:06.999831 16176 status_manager.go:851] "Failed to get status for pod" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-77df56447c-vsrxx\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.020351 master-0 kubenswrapper[16176]: I1203 14:03:07.020218 16176 status_manager.go:851] "Failed to get status for pod" podUID="6935a3f8-723e-46e6-8498-483f34bf0825" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-f9f7f4946-48mrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.040666 master-0 kubenswrapper[16176]: I1203 14:03:07.040568 16176 status_manager.go:851] "Failed to get status for pod" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5fdc576499-j2n8j\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.060254 master-0 kubenswrapper[16176]: I1203 14:03:07.060126 16176 status_manager.go:851] "Failed to get status for pod" podUID="eecc43f5-708f-4395-98cc-696b243d6321" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-pvrfs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.079446 master-0 kubenswrapper[16176]: I1203 14:03:07.079351 16176 status_manager.go:851] "Failed to get status for pod" podUID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" pod="openshift-network-operator/iptables-alerter-n24qb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-n24qb\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.100468 master-0 kubenswrapper[16176]: I1203 14:03:07.100254 16176 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.120673 master-0 kubenswrapper[16176]: I1203 14:03:07.120552 16176 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.139960 master-0 
kubenswrapper[16176]: I1203 14:03:07.139806 16176 status_manager.go:851] "Failed to get status for pod" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-69cc794c58-mfjk2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.160288 master-0 kubenswrapper[16176]: I1203 14:03:07.160117 16176 status_manager.go:851] "Failed to get status for pod" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6cbf58c977-8lh6n\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.180057 master-0 kubenswrapper[16176]: I1203 14:03:07.179938 16176 status_manager.go:851] "Failed to get status for pod" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" pod="openshift-network-diagnostics/network-check-target-pcchm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-pcchm\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.200810 master-0 kubenswrapper[16176]: I1203 14:03:07.200669 16176 status_manager.go:851] "Failed to get status for pod" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7cf5cf757f-zgm6l\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.221116 master-0 kubenswrapper[16176]: I1203 14:03:07.220977 16176 status_manager.go:851] "Failed to get status for pod" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" pod="openshift-console/console-648d88c756-vswh8" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-648d88c756-vswh8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.240719 master-0 kubenswrapper[16176]: I1203 14:03:07.240618 16176 status_manager.go:851] "Failed to get status for pod" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" pod="openshift-dns/dns-default-5m4f8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-5m4f8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.260219 master-0 kubenswrapper[16176]: I1203 14:03:07.260099 16176 status_manager.go:851] "Failed to get status for pod" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-74cddd4fb5-phk6r\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.280235 master-0 kubenswrapper[16176]: I1203 14:03:07.280105 16176 status_manager.go:851] "Failed to get status for pod" podUID="fd2fa610bb2a39c39fcdd00db03a511a" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.299892 master-0 kubenswrapper[16176]: I1203 14:03:07.299819 16176 status_manager.go:851] "Failed to get status for pod" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-678c7f799b-4b7nv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.319707 master-0 
kubenswrapper[16176]: I1203 14:03:07.319573 16176 status_manager.go:851] "Failed to get status for pod" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" pod="openshift-console/console-c5d7cd7f9-2hp75" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-c5d7cd7f9-2hp75\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.340413 master-0 kubenswrapper[16176]: I1203 14:03:07.340329 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-747bdb58b5-mn76f\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.360202 master-0 kubenswrapper[16176]: I1203 14:03:07.360014 16176 status_manager.go:851] "Failed to get status for pod" podUID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" pod="openshift-dns/node-resolver-4xlhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-4xlhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.379849 master-0 kubenswrapper[16176]: I1203 14:03:07.379746 16176 status_manager.go:851] "Failed to get status for pod" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-cb84b9cdf-qn94w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.400220 master-0 kubenswrapper[16176]: I1203 14:03:07.400133 16176 status_manager.go:851] "Failed to get status for pod" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-6b7bcd6566-jh9m8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.420031 master-0 kubenswrapper[16176]: I1203 14:03:07.419943 16176 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.440095 master-0 kubenswrapper[16176]: I1203 14:03:07.440010 16176 status_manager.go:851] "Failed to get status for pod" podUID="c98a8d85d3901d33f6fe192bdc7172aa" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.459929 master-0 kubenswrapper[16176]: I1203 14:03:07.459842 16176 status_manager.go:851] "Failed to get status for pod" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" pod="openshift-ingress-canary/ingress-canary-vkpv4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-vkpv4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.480220 master-0 kubenswrapper[16176]: I1203 14:03:07.480090 16176 status_manager.go:851] "Failed to get status for pod" podUID="b495b0c38f2c54e7cc46282c5f92aab5" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.500164 master-0 
kubenswrapper[16176]: I1203 14:03:07.500069 16176 status_manager.go:851] "Failed to get status for pod" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-6964bb78b7-g4lv2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.519992 master-0 kubenswrapper[16176]: I1203 14:03:07.519882 16176 status_manager.go:851] "Failed to get status for pod" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-65dc4bcb88-96zcz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.527145 master-0 kubenswrapper[16176]: I1203 14:03:07.527002 16176 patch_prober.go:28] interesting pod/dns-default-5m4f8 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused" start-of-body= Dec 03 14:03:07.527344 master-0 kubenswrapper[16176]: I1203 14:03:07.527209 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused" Dec 03 14:03:07.543759 master-0 kubenswrapper[16176]: I1203 14:03:07.543526 16176 status_manager.go:851] "Failed to get status for pod" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-86897dd478-qqwh7\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.560182 master-0 kubenswrapper[16176]: I1203 14:03:07.560080 16176 status_manager.go:851] "Failed to get status for pod" podUID="d7d6a05e-beee-40e9-b376-5c22e285b27a" pod="openshift-image-registry/node-ca-4p4zh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-4p4zh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.579409 master-0 kubenswrapper[16176]: I1203 14:03:07.579309 16176 status_manager.go:851] "Failed to get status for pod" podUID="d3200abb-a440-44db-8897-79c809c1d838" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78d987764b-xcs5w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.599524 master-0 kubenswrapper[16176]: I1203 14:03:07.599438 16176 status_manager.go:851] "Failed to get status for pod" podUID="7bce50c457ac1f4721bc81a570dd238a" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.620218 master-0 kubenswrapper[16176]: I1203 14:03:07.620070 16176 status_manager.go:851] "Failed to get status for pod" podUID="ebf07eb54db570834b7c9a90b6b07403" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.640620 master-0 kubenswrapper[16176]: I1203 14:03:07.640570 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-76bd5d69c7-fjrrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.660118 master-0 kubenswrapper[16176]: I1203 14:03:07.659949 16176 status_manager.go:851] "Failed to get status for pod" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" pod="openshift-marketplace/redhat-marketplace-ddwmn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ddwmn\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.680990 master-0 kubenswrapper[16176]: I1203 14:03:07.680868 16176 status_manager.go:851] "Failed to get status for pod" podUID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" pod="openshift-monitoring/node-exporter-b62gf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/node-exporter-b62gf\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.700235 master-0 kubenswrapper[16176]: I1203 14:03:07.700144 16176 status_manager.go:851] "Failed to get status for pod" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.719664 master-0 kubenswrapper[16176]: I1203 14:03:07.719563 16176 status_manager.go:851] "Failed to get status for pod" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-7b795784b8-44frm\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.740034 master-0 kubenswrapper[16176]: I1203 14:03:07.739893 16176 
status_manager.go:851] "Failed to get status for pod" podUID="1c562495-1290-4792-b4b2-639faa594ae2" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-667484ff5-n7qz8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.760637 master-0 kubenswrapper[16176]: I1203 14:03:07.760518 16176 status_manager.go:851] "Failed to get status for pod" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-56f5898f45-fhnc5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.779909 master-0 kubenswrapper[16176]: I1203 14:03:07.779803 16176 status_manager.go:851] "Failed to get status for pod" podUID="19c2a40b-213c-42f1-9459-87c2e780a75f" pod="openshift-multus/multus-additional-cni-plugins-42hmk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-42hmk\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.800894 master-0 kubenswrapper[16176]: I1203 14:03:07.800805 16176 status_manager.go:851] "Failed to get status for pod" podUID="911f6333-cdb0-425c-b79b-f892444b7097" pod="openshift-marketplace/redhat-operators-6z4sc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6z4sc\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.819874 master-0 kubenswrapper[16176]: I1203 14:03:07.819782 16176 status_manager.go:851] "Failed to get status for pod" podUID="52100521-67e9-40c9-887c-eda6560f06e0" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-7978bf889c-n64v4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.840440 master-0 kubenswrapper[16176]: I1203 14:03:07.840328 16176 status_manager.go:851] "Failed to get status for pod" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-5f78c89466-bshxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.860017 master-0 kubenswrapper[16176]: I1203 14:03:07.859850 16176 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.881480 master-0 kubenswrapper[16176]: I1203 14:03:07.881247 16176 status_manager.go:851] "Failed to get status for pod" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-57cbc648f8-q4cgg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.900618 master-0 kubenswrapper[16176]: I1203 14:03:07.900479 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b681889-eb2c-41fb-a1dc-69b99227b45b" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.920034 master-0 kubenswrapper[16176]: I1203 14:03:07.919958 16176 status_manager.go:851] "Failed to get status for pod" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-cc996c4bd-j4hzr\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: I1203 14:03:07.930217 16176 patch_prober.go:28] interesting pod/apiserver-6985f84b49-v9vlg container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]log ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]etcd excluded: ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]etcd-readiness excluded: ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]informer-sync ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/max-in-flight-filter ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-startinformers ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: [-]shutdown failed: reason withheld Dec 03 14:03:07.930324 master-0 kubenswrapper[16176]: readyz check failed Dec 03 14:03:07.931572 master-0 kubenswrapper[16176]: I1203 14:03:07.930330 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:03:07.940684 master-0 kubenswrapper[16176]: I1203 14:03:07.940552 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" pod="openshift-multus/network-metrics-daemon-ch7xd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-ch7xd\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.959998 master-0 kubenswrapper[16176]: I1203 14:03:07.959886 16176 status_manager.go:851] "Failed to get status for pod" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-6985f84b49-v9vlg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:07.980462 master-0 
kubenswrapper[16176]: I1203 14:03:07.980353 16176 status_manager.go:851] "Failed to get status for pod" podUID="22673f47-9484-4eed-bbce-888588c754ed" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5bdcc987c4-x99xc\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.000683 master-0 kubenswrapper[16176]: I1203 14:03:08.000577 16176 status_manager.go:851] "Failed to get status for pod" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.021403 master-0 kubenswrapper[16176]: I1203 14:03:08.021179 16176 status_manager.go:851] "Failed to get status for pod" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" pod="openshift-console/downloads-6f5db8559b-96ljh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-6f5db8559b-96ljh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.040692 master-0 kubenswrapper[16176]: I1203 14:03:08.040574 16176 status_manager.go:851] "Failed to get status for pod" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-547cc9cc49-kqs4k\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.060381 master-0 kubenswrapper[16176]: I1203 14:03:08.060221 16176 status_manager.go:851] "Failed to get status for pod" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" pod="openshift-monitoring/prometheus-k8s-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.081506 master-0 kubenswrapper[16176]: I1203 14:03:08.081342 16176 status_manager.go:851] "Failed to get status for pod" podUID="0b1e0884-ff54-419b-90d3-25f561a6391d" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.100655 master-0 kubenswrapper[16176]: I1203 14:03:08.100538 16176 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.120173 master-0 kubenswrapper[16176]: I1203 14:03:08.120064 16176 status_manager.go:851] "Failed to get status for pod" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7dcc7f9bd6-68wml\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.140771 master-0 kubenswrapper[16176]: I1203 14:03:08.140698 16176 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.160246 master-0 kubenswrapper[16176]: I1203 14:03:08.160158 16176 
status_manager.go:851] "Failed to get status for pod" podUID="77430348-b53a-4898-8047-be8bb542a0a7" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-txl6b\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.180928 master-0 kubenswrapper[16176]: I1203 14:03:08.180859 16176 status_manager.go:851] "Failed to get status for pod" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-664c9d94c9-9vfr4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.200628 master-0 kubenswrapper[16176]: I1203 14:03:08.200557 16176 status_manager.go:851] "Failed to get status for pod" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-2ztl9\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.220682 master-0 kubenswrapper[16176]: I1203 14:03:08.220587 16176 status_manager.go:851] "Failed to get status for pod" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-57cbc648f8-q4cgg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.240946 master-0 kubenswrapper[16176]: I1203 14:03:08.240863 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b681889-eb2c-41fb-a1dc-69b99227b45b" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.260492 master-0 kubenswrapper[16176]: I1203 14:03:08.260394 16176 status_manager.go:851] "Failed to get status for pod" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-cc996c4bd-j4hzr\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.281278 master-0 kubenswrapper[16176]: I1203 14:03:08.280880 16176 status_manager.go:851] "Failed to get status for pod" podUID="52100521-67e9-40c9-887c-eda6560f06e0" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/pods/etcd-operator-7978bf889c-n64v4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.300235 master-0 kubenswrapper[16176]: I1203 14:03:08.300123 16176 status_manager.go:851] "Failed to get status for pod" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-5f78c89466-bshxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.320842 master-0 kubenswrapper[16176]: I1203 14:03:08.320717 16176 status_manager.go:851] "Failed to get status for pod" podUID="ec89938d-35a5-46ba-8c63-12489db18cbd" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-7c49fbfc6f-7krqx\": dial tcp 
192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.340803 master-0 kubenswrapper[16176]: I1203 14:03:08.340669 16176 status_manager.go:851] "Failed to get status for pod" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-6985f84b49-v9vlg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.358599 master-0 kubenswrapper[16176]: E1203 14:03:08.358518 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.359548 master-0 kubenswrapper[16176]: I1203 14:03:08.359467 16176 status_manager.go:851] "Failed to get status for pod" podUID="22673f47-9484-4eed-bbce-888588c754ed" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5bdcc987c4-x99xc\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.359612 master-0 kubenswrapper[16176]: E1203 14:03:08.359495 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.360099 master-0 kubenswrapper[16176]: E1203 14:03:08.360050 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.360501 master-0 kubenswrapper[16176]: E1203 14:03:08.360465 16176 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.360851 master-0 kubenswrapper[16176]: E1203 14:03:08.360817 16176 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.360851 master-0 kubenswrapper[16176]: I1203 14:03:08.360845 16176 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 14:03:08.361246 master-0 kubenswrapper[16176]: E1203 14:03:08.361189 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Dec 03 14:03:08.380168 master-0 kubenswrapper[16176]: I1203 14:03:08.380099 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" pod="openshift-multus/network-metrics-daemon-ch7xd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-ch7xd\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.400149 master-0 kubenswrapper[16176]: I1203 14:03:08.400022 16176 status_manager.go:851] "Failed to get status for pod" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" pod="openshift-console/downloads-6f5db8559b-96ljh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-6f5db8559b-96ljh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.419526 master-0 kubenswrapper[16176]: I1203 14:03:08.419471 16176 
status_manager.go:851] "Failed to get status for pod" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-547cc9cc49-kqs4k\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.440623 master-0 kubenswrapper[16176]: I1203 14:03:08.440537 16176 status_manager.go:851] "Failed to get status for pod" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.460652 master-0 kubenswrapper[16176]: I1203 14:03:08.460563 16176 status_manager.go:851] "Failed to get status for pod" podUID="0b1e0884-ff54-419b-90d3-25f561a6391d" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.479627 master-0 kubenswrapper[16176]: I1203 14:03:08.479578 16176 status_manager.go:851] "Failed to get status for pod" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.499687 master-0 kubenswrapper[16176]: I1203 14:03:08.499609 16176 status_manager.go:851] "Failed to get status for pod" podUID="b340553b-d483-4839-8328-518f27770832" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-6d64b47964-jjd7h\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.519895 master-0 kubenswrapper[16176]: I1203 14:03:08.519774 16176 status_manager.go:851] "Failed to get status for pod" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7dcc7f9bd6-68wml\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: I1203 14:03:08.534992 16176 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]log ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]etcd excluded: ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]etcd-readiness excluded: ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]informer-sync ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/max-in-flight-filter ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-StartUserInformer ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-StartOAuthInformer ok Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Dec 
03 14:03:08.535091 master-0 kubenswrapper[16176]: [-]shutdown failed: reason withheld Dec 03 14:03:08.535091 master-0 kubenswrapper[16176]: readyz check failed Dec 03 14:03:08.536001 master-0 kubenswrapper[16176]: I1203 14:03:08.535106 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:03:08.540338 master-0 kubenswrapper[16176]: I1203 14:03:08.540186 16176 status_manager.go:851] "Failed to get status for pod" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" pod="openshift-marketplace/certified-operators-t8rt7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-t8rt7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.560885 master-0 kubenswrapper[16176]: I1203 14:03:08.560729 16176 status_manager.go:851] "Failed to get status for pod" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-664c9d94c9-9vfr4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.562333 master-0 kubenswrapper[16176]: E1203 14:03:08.562232 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Dec 03 14:03:08.581035 master-0 kubenswrapper[16176]: I1203 14:03:08.580890 16176 status_manager.go:851] "Failed to get status for pod" podUID="77430348-b53a-4898-8047-be8bb542a0a7" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-txl6b\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.600099 master-0 kubenswrapper[16176]: I1203 14:03:08.599989 16176 status_manager.go:851] "Failed to get status for pod" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-754cfd84-qf898\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.614414 master-0 kubenswrapper[16176]: I1203 14:03:08.614349 16176 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:03:08.620608 master-0 kubenswrapper[16176]: I1203 14:03:08.620517 16176 status_manager.go:851] "Failed to get status for pod" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-f84784664-ntb9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.639846 master-0 kubenswrapper[16176]: I1203 14:03:08.639745 16176 status_manager.go:851] "Failed to get status for pod" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-565bdcb8-477pk\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.660292 master-0 kubenswrapper[16176]: I1203 14:03:08.660129 16176 status_manager.go:851] "Failed to get status for pod" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-75b4d49d4c-h599p\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.680693 master-0 kubenswrapper[16176]: I1203 14:03:08.680563 16176 status_manager.go:851] "Failed to get status for pod" podUID="da583723-b3ad-4a6f-b586-09b739bd7f8c" pod="openshift-network-node-identity/network-node-identity-c8csx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-c8csx\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.700641 master-0 kubenswrapper[16176]: I1203 14:03:08.700511 16176 status_manager.go:851] "Failed to get status for pod" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-59d99f9b7b-74sss\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.720907 master-0 kubenswrapper[16176]: I1203 14:03:08.720754 16176 status_manager.go:851] "Failed to get status for pod" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" pod="openshift-marketplace/community-operators-7fwtv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7fwtv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.740837 master-0 kubenswrapper[16176]: I1203 14:03:08.740725 16176 status_manager.go:851] "Failed to get status for pod" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7c64dd9d8b-49skr\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.760334 master-0 
kubenswrapper[16176]: I1203 14:03:08.760157 16176 status_manager.go:851] "Failed to get status for pod" podUID="0535e784-8e28-4090-aa2e-df937910767c" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-7479ffdf48-hpdzl\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.780813 master-0 kubenswrapper[16176]: I1203 14:03:08.780721 16176 status_manager.go:851] "Failed to get status for pod" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-85dbd94574-8jfp5\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.800158 master-0 kubenswrapper[16176]: I1203 14:03:08.800050 16176 status_manager.go:851] "Failed to get status for pod" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-77df56447c-vsrxx\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.820915 master-0 kubenswrapper[16176]: I1203 14:03:08.820799 16176 status_manager.go:851] "Failed to get status for pod" podUID="6935a3f8-723e-46e6-8498-483f34bf0825" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-f9f7f4946-48mrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.840672 master-0 kubenswrapper[16176]: I1203 14:03:08.840580 16176 status_manager.go:851] "Failed to get status for pod" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5fdc576499-j2n8j\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.860573 master-0 kubenswrapper[16176]: I1203 14:03:08.860424 16176 status_manager.go:851] "Failed to get status for pod" podUID="eecc43f5-708f-4395-98cc-696b243d6321" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-pvrfs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.880450 master-0 kubenswrapper[16176]: I1203 14:03:08.880315 16176 status_manager.go:851] "Failed to get status for pod" podUID="adbcce01-7282-4a75-843a-9623060346f0" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-7c4697b5f5-9f69p\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.901063 master-0 kubenswrapper[16176]: I1203 14:03:08.900814 16176 status_manager.go:851] "Failed to get status for pod" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-7f88444875-6dk29\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.920184 master-0 kubenswrapper[16176]: I1203 14:03:08.919975 16176 status_manager.go:851] "Failed to get status for pod" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-7486ff55f-wcnxg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.940553 master-0 kubenswrapper[16176]: I1203 14:03:08.940445 16176 status_manager.go:851] "Failed to get status for pod" podUID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" pod="openshift-network-operator/iptables-alerter-n24qb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-n24qb\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.959987 master-0 kubenswrapper[16176]: I1203 14:03:08.959876 16176 status_manager.go:851] "Failed to get status for pod" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" pod="openshift-network-diagnostics/network-check-target-pcchm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-pcchm\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.963409 master-0 kubenswrapper[16176]: E1203 14:03:08.963357 16176 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Dec 03 14:03:08.980312 master-0 kubenswrapper[16176]: I1203 14:03:08.980235 16176 status_manager.go:851] "Failed to get status for pod" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-7cf5cf757f-zgm6l\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:08.999185 master-0 kubenswrapper[16176]: I1203 14:03:08.999098 16176 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-7c4dc67499-tjwg8_eefee934-ac6b-44e3-a6be-1ae62362ab4f/cloud-credential-operator/0.log" Dec 03 14:03:08.999802 master-0 kubenswrapper[16176]: I1203 14:03:08.999759 16176 generic.go:334] "Generic (PLEG): container finished" podID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" containerID="f1da71cf736ec67987c67a2c683d673658afbbde7b7d3088a88079d70f7698eb" exitCode=1 Dec 03 14:03:08.999855 master-0 kubenswrapper[16176]: I1203 14:03:08.999813 16176 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerDied","Data":"f1da71cf736ec67987c67a2c683d673658afbbde7b7d3088a88079d70f7698eb"} Dec 03 14:03:09.000574 master-0 kubenswrapper[16176]: I1203 14:03:09.000461 16176 status_manager.go:851] "Failed to get status for pod" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-69cc794c58-mfjk2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.013485 master-0 kubenswrapper[16176]: I1203 14:03:09.013405 16176 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Dec 03 14:03:09.013559 master-0 kubenswrapper[16176]: I1203 14:03:09.013514 16176 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Dec 03 14:03:09.020952 master-0 
kubenswrapper[16176]: I1203 14:03:09.020852 16176 status_manager.go:851] "Failed to get status for pod" podUID="e97e1725-cb55-4ce3-952d-a4fd0731577d" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6cbf58c977-8lh6n\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.040944 master-0 kubenswrapper[16176]: I1203 14:03:09.040833 16176 status_manager.go:851] "Failed to get status for pod" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-74cddd4fb5-phk6r\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.060909 master-0 kubenswrapper[16176]: I1203 14:03:09.060755 16176 status_manager.go:851] "Failed to get status for pod" podUID="fd2fa610bb2a39c39fcdd00db03a511a" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.080576 master-0 kubenswrapper[16176]: I1203 14:03:09.080390 16176 status_manager.go:851] "Failed to get status for pod" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-678c7f799b-4b7nv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.100483 master-0 kubenswrapper[16176]: I1203 14:03:09.100393 16176 status_manager.go:851] "Failed to get status for pod" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" 
pod="openshift-console/console-c5d7cd7f9-2hp75" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-c5d7cd7f9-2hp75\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.120962 master-0 kubenswrapper[16176]: I1203 14:03:09.120816 16176 status_manager.go:851] "Failed to get status for pod" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" pod="openshift-console/console-648d88c756-vswh8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-648d88c756-vswh8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.140582 master-0 kubenswrapper[16176]: I1203 14:03:09.140445 16176 status_manager.go:851] "Failed to get status for pod" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" pod="openshift-dns/dns-default-5m4f8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-5m4f8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.161057 master-0 kubenswrapper[16176]: I1203 14:03:09.160927 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-747bdb58b5-mn76f\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.181683 master-0 kubenswrapper[16176]: I1203 14:03:09.181199 16176 status_manager.go:851] "Failed to get status for pod" podUID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" pod="openshift-dns/node-resolver-4xlhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-4xlhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.201054 master-0 kubenswrapper[16176]: I1203 14:03:09.200825 16176 status_manager.go:851] "Failed to get status for pod" 
podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-cb84b9cdf-qn94w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.220530 master-0 kubenswrapper[16176]: I1203 14:03:09.220425 16176 status_manager.go:851] "Failed to get status for pod" podUID="c98a8d85d3901d33f6fe192bdc7172aa" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.240592 master-0 kubenswrapper[16176]: I1203 14:03:09.240479 16176 status_manager.go:851] "Failed to get status for pod" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-6b7bcd6566-jh9m8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.260653 master-0 kubenswrapper[16176]: I1203 14:03:09.260542 16176 status_manager.go:851] "Failed to get status for pod" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-7c4dc67499-tjwg8\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.280516 master-0 kubenswrapper[16176]: I1203 14:03:09.280459 16176 status_manager.go:851] "Failed to get status for pod" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-65dc4bcb88-96zcz\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.300170 master-0 kubenswrapper[16176]: I1203 14:03:09.300086 16176 status_manager.go:851] "Failed to get status for pod" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-86897dd478-qqwh7\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.320551 master-0 kubenswrapper[16176]: I1203 14:03:09.320429 16176 status_manager.go:851] "Failed to get status for pod" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" pod="openshift-ingress-canary/ingress-canary-vkpv4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-vkpv4\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.340890 master-0 kubenswrapper[16176]: I1203 14:03:09.340804 16176 status_manager.go:851] "Failed to get status for pod" podUID="b495b0c38f2c54e7cc46282c5f92aab5" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.361148 master-0 kubenswrapper[16176]: I1203 14:03:09.360903 16176 status_manager.go:851] "Failed to get status for pod" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-6964bb78b7-g4lv2\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 
14:03:09.380923 master-0 kubenswrapper[16176]: I1203 14:03:09.380802 16176 status_manager.go:851] "Failed to get status for pod" podUID="d7d6a05e-beee-40e9-b376-5c22e285b27a" pod="openshift-image-registry/node-ca-4p4zh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/node-ca-4p4zh\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.400360 master-0 kubenswrapper[16176]: I1203 14:03:09.400200 16176 status_manager.go:851] "Failed to get status for pod" podUID="d3200abb-a440-44db-8897-79c809c1d838" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-78d987764b-xcs5w\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.420703 master-0 kubenswrapper[16176]: I1203 14:03:09.420605 16176 status_manager.go:851] "Failed to get status for pod" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-76bd5d69c7-fjrrg\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.440284 master-0 kubenswrapper[16176]: I1203 14:03:09.440033 16176 status_manager.go:851] "Failed to get status for pod" podUID="7bce50c457ac1f4721bc81a570dd238a" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.461238 master-0 kubenswrapper[16176]: I1203 14:03:09.461106 16176 status_manager.go:851] "Failed to get status for pod" podUID="ebf07eb54db570834b7c9a90b6b07403" pod="openshift-etcd/etcd-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.480380 master-0 kubenswrapper[16176]: I1203 14:03:09.480312 16176 status_manager.go:851] "Failed to get status for pod" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" pod="openshift-marketplace/redhat-marketplace-ddwmn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ddwmn\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:03:09.495127 master-0 systemd[1]: Stopping Kubernetes Kubelet... Dec 03 14:03:09.536941 master-0 systemd[1]: kubelet.service: Deactivated successfully. Dec 03 14:03:09.537304 master-0 systemd[1]: Stopped Kubernetes Kubelet. Dec 03 14:03:09.537835 master-0 systemd[1]: kubelet.service: Consumed 57.375s CPU time. -- Boot 764a923eeafb47f486359cb972b9b445 -- Dec 03 14:07:51.971023 master-0 systemd[1]: Starting Kubernetes Kubelet... Dec 03 14:07:52.153965 master-0 kubenswrapper[3187]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:07:52.153965 master-0 kubenswrapper[3187]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 14:07:52.153965 master-0 kubenswrapper[3187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:07:52.153965 master-0 kubenswrapper[3187]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:07:52.157193 master-0 kubenswrapper[3187]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 14:07:52.157193 master-0 kubenswrapper[3187]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:07:52.157193 master-0 kubenswrapper[3187]: I1203 14:07:52.154267 3187 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 14:07:52.157859 master-0 kubenswrapper[3187]: W1203 14:07:52.157829 3187 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:07:52.157859 master-0 kubenswrapper[3187]: W1203 14:07:52.157845 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:07:52.157859 master-0 kubenswrapper[3187]: W1203 14:07:52.157850 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:07:52.157859 master-0 kubenswrapper[3187]: W1203 14:07:52.157854 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157859 3187 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157895 3187 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157900 3187 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157905 3187 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157908 3187 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157912 3187 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157916 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157920 3187 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157924 3187 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157928 3187 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157932 3187 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157937 3187 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157942 3187 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157947 3187 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
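The deprecation notices above direct these settings into the file given by --config. A minimal, hypothetical sketch of the corresponding KubeletConfiguration fields, assuming the kubelet.config.k8s.io/v1beta1 schema and taking the values from the flags logged in this journal:

```yaml
# Sketch only: field names assume the upstream KubeletConfiguration v1beta1
# API; values mirror the deprecated flags recorded in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: /var/run/crio/crio.sock
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
systemReserved:
  cpu: 500m
  ephemeral-storage: 1Gi
  memory: 1Gi
```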
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157952 3187 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157956 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157960 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157963 3187 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:07:52.158032 master-0 kubenswrapper[3187]: W1203 14:07:52.157968 3187 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157971 3187 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157976 3187 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
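The "unrecognized feature gate" warnings above repeat for every gate the kubelet's upstream feature-gate parser does not know. A quick way to reduce them to a unique, counted list (a sketch, assuming the excerpt is saved as kubelet.log; on a live node `journalctl -u kubelet` could feed the same pipeline):

```shell
# Count each distinct unrecognized feature gate named in the log.
grep -o 'unrecognized feature gate: [A-Za-z0-9]*' kubelet.log \
  | awk '{print $4}' | sort | uniq -c | sort -rn
```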
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157981 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157985 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157989 3187 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157992 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.157996 3187 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158000 3187 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158004 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158007 3187 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158011 3187 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158015 3187 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158019 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158023 3187 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158027 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158031 3187 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158036 3187 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158041 3187 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:07:52.158604 master-0 kubenswrapper[3187]: W1203 14:07:52.158045 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158049 3187 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158053 3187 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158056 3187 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158060 3187 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158065 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158068 3187 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158072 3187 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158075 3187 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158079 3187 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158083 3187 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158086 3187 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158090 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158094 3187 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158098 3187 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158102 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158105 3187 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158108 3187 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158112 3187 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158116 3187 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:07:52.159148 master-0 kubenswrapper[3187]: W1203 14:07:52.158119 3187 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158123 3187 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158127 3187 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158130 3187 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158138 3187 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158142 3187 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158145 3187 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158149 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158152 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158155 3187 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: W1203 14:07:52.158159 3187 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158248 3187 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158257 3187 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158269 3187 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158275 3187 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158280 3187 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158286 3187 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158292 3187 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158297 3187 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158301 3187 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158306 3187 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 14:07:52.160111 master-0 kubenswrapper[3187]: I1203 14:07:52.158313 3187 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158318 3187 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158322 3187 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158327 3187 flags.go:64] FLAG: --cgroup-root=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158330 3187 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158335 3187 flags.go:64] FLAG: --client-ca-file=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158339 3187 flags.go:64] FLAG: --cloud-config=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158343 3187 flags.go:64] FLAG: --cloud-provider=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158347 3187 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158360 3187 flags.go:64] FLAG: --cluster-domain=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158364 3187 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158368 3187 flags.go:64] FLAG: --config-dir=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158373 3187 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158377 3187 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158394 3187 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158402 3187 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158408 3187 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158428 3187 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158434 3187 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158440 3187 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158445 3187 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158450 3187 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158456 3187 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158462 3187 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158468 3187 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 14:07:52.160644 master-0 kubenswrapper[3187]: I1203 14:07:52.158473 3187 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158478 3187 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158482 3187 flags.go:64] FLAG: --enable-server="true"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158486 3187 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158492 3187 flags.go:64] FLAG: --event-burst="100"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158496 3187 flags.go:64] FLAG: --event-qps="50"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158501 3187 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158505 3187 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158509 3187 flags.go:64] FLAG: --eviction-hard=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158514 3187 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158518 3187 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158523 3187 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158527 3187 flags.go:64] FLAG: --eviction-soft=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158531 3187 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158535 3187 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158539 3187 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158543 3187 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158547 3187 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158551 3187 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158558 3187 flags.go:64] FLAG: --feature-gates=""
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158563 3187 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158567 3187 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158573 3187 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158577 3187 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158581 3187 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158585 3187 flags.go:64] FLAG: --help="false"
Dec 03 14:07:52.161239 master-0 kubenswrapper[3187]: I1203 14:07:52.158589 3187 flags.go:64] FLAG: --hostname-override=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158594 3187 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158598 3187 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158602 3187 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158607 3187 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158611 3187 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158615 3187 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158620 3187 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158623 3187 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158628 3187 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158632 3187 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158636 3187 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158640 3187 flags.go:64] FLAG: --kube-reserved=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158644 3187 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158648 3187 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158652 3187 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158656 3187 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158661 3187 flags.go:64] FLAG: --lock-file=""
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158665 3187 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158669 3187 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158674 3187 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158680 3187 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158685 3187 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158689 3187 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158693 3187 flags.go:64] FLAG: --logging-format="text"
Dec 03 14:07:52.161915 master-0 kubenswrapper[3187]: I1203 14:07:52.158699 3187 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158703 3187 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158708 3187 flags.go:64] FLAG: --manifest-url=""
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158714 3187 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158719 3187 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158724 3187 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158729 3187 flags.go:64] FLAG: --max-pods="110"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158733 3187 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158737 3187 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158741 3187 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158745 3187 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158750 3187 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158754 3187 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158758 3187 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158767 3187 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158771 3187 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158775 3187 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158779 3187 flags.go:64] FLAG: --pod-cidr=""
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158783 3187 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158789 3187 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158793 3187 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158797 3187 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158801 3187 flags.go:64] FLAG: --port="10250"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158805 3187 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 14:07:52.162562 master-0 kubenswrapper[3187]: I1203 14:07:52.158809 3187 flags.go:64] FLAG: --provider-id=""
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158814 3187 flags.go:64] FLAG: --qos-reserved=""
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158817 3187 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158821 3187 flags.go:64] FLAG: --register-node="true"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158826 3187 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158830 3187 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158837 3187 flags.go:64] FLAG: --registry-burst="10"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158841 3187 flags.go:64] FLAG: --registry-qps="5"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158847 3187 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158851 3187 flags.go:64] FLAG: --reserved-memory=""
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158857 3187 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158861 3187 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158867 3187 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158871 3187 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158875 3187 flags.go:64] FLAG: --runonce="false"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158879 3187 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158883 3187 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158887 3187 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158891 3187 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158895 3187 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158899 3187 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158903 3187 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158907 3187 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158911 3187 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158915 3187 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 14:07:52.163137 master-0 kubenswrapper[3187]: I1203 14:07:52.158919 3187 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158923 3187 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158928 3187 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158932 3187 flags.go:64] FLAG: --system-cgroups=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158936 3187 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158942 3187 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158946 3187 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158950 3187 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158956 3187 flags.go:64] FLAG: --tls-min-version=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158960 3187 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158964 3187 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158968 3187 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158972 3187 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158976 3187 flags.go:64] FLAG: --v="2"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158982 3187 flags.go:64] FLAG: --version="false"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158989 3187 flags.go:64] FLAG: --vmodule=""
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158994 3187 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: I1203 14:07:52.158998 3187 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159125 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159136 3187 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159140 3187 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159144 3187 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159147 3187 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159151 3187 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:07:52.164007 master-0 kubenswrapper[3187]: W1203 14:07:52.159155 3187 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159159 3187 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159162 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159167 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159170 3187 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159174 3187 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159178 3187 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159182 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159185 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159189 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159192 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159195 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159199 3187 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159203 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159206 3187 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159210 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159213 3187 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159218 3187 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159222 3187 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:07:52.164661 master-0 kubenswrapper[3187]: W1203 14:07:52.159226 3187 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159229 3187 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159233 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159236 3187 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159242 3187 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159246 3187 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159249 3187 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159252 3187 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159257 3187 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159261 3187 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159264 3187 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159268 3187 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159272 3187 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159275 3187 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159279 3187 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159282 3187 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159286 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159290 3187 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159295 3187 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159299 3187 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:07:52.165159 master-0 kubenswrapper[3187]: W1203 14:07:52.159302 3187 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159306 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159310 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159313 3187 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159318 3187 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159322 3187 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159325 3187 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159329 3187 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159332 3187 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159336 3187 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159341 3187 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159345 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159350 3187 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159354 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159358 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159362 3187 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159367 3187 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159371 3187 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159374 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:07:52.165822 master-0 kubenswrapper[3187]: W1203 14:07:52.159378 3187 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159383 3187 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159387 3187 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159390 3187 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159394 3187 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159397 3187 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159401 3187 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: W1203 14:07:52.159404 3187 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:07:52.166441 master-0 kubenswrapper[3187]: I1203 14:07:52.159433 3187 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:07:52.166859 master-0 kubenswrapper[3187]: I1203 14:07:52.166809 3187 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 14:07:52.166859 master-0 kubenswrapper[3187]: I1203 14:07:52.166852 3187 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 14:07:52.166960 master-0 kubenswrapper[3187]: W1203 14:07:52.166938 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:07:52.166960 master-0 kubenswrapper[3187]: W1203 14:07:52.166951 3187 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:07:52.166960 master-0 kubenswrapper[3187]: W1203 14:07:52.166955 3187 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:07:52.166960 master-0 kubenswrapper[3187]: W1203 14:07:52.166961 3187 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:07:52.166960 master-0 kubenswrapper[3187]: W1203 14:07:52.166965 3187 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166969 3187 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166973 3187 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166977 3187 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166981 3187 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166984 3187 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166988 3187 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166991 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166995 3187 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.166998 3187 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167002 3187 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167006 3187 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167010 3187 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167013 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167017 3187 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167021 3187 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167024 3187 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167028 3187 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167031 3187 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167035 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167039 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:07:52.167092 master-0 kubenswrapper[3187]: W1203 14:07:52.167043 3187 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167046 3187 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167052 3187 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167055 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167059 3187 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167063 3187 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167067 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167070 3187 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167075 3187 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167080 3187 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167084 3187 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167087 3187 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167092 3187 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167099 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167104 3187 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167109 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167114 3187 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167118 3187 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167122 3187 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:07:52.167599 master-0 kubenswrapper[3187]: W1203 14:07:52.167126 3187 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167130 3187 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167133 3187 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167137 3187 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167142 3187 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167146 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167150 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167153 3187 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167157 3187 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167161 3187 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167164 3187 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167168 3187 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167172 3187 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167175 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167179 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167183 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167188 3187 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167193 3187 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167197 3187 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:07:52.168056 master-0 kubenswrapper[3187]: W1203 14:07:52.167202 3187 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167206 3187 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167210 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167213 3187 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167217 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167221 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167225 3187 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167230 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167233 3187 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: I1203 14:07:52.167240 3187 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167360 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167366 3187 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167369 3187 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167373 3187 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167377 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:07:52.168675 master-0 kubenswrapper[3187]: W1203 14:07:52.167380 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167384 3187 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167387 3187 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167391 3187 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167395 3187 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167399 3187 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167402 3187 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167406 3187 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167411 3187 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167428 3187 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167432 3187 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167436 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167440 3187 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167443 3187 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167449 3187 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167453 3187 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167456 3187 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167460 3187 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167464 3187 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:07:52.169044 master-0 kubenswrapper[3187]: W1203 14:07:52.167468 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167472 3187 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167477 3187 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167481 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167484 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167488 3187 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167491 3187 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167495 3187 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167498 3187 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167502 3187 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167505 3187 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167509 3187 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167513 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167516 3187 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167520 3187 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167523 3187 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167527 3187 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167531 3187 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167534 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167537 3187 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:07:52.169709 master-0 kubenswrapper[3187]: W1203 14:07:52.167542 3187 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167546 3187 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167551 3187 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167555 3187 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167558 3187 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167562 3187 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167566 3187 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167570 3187 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167573 3187 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167576 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167580 3187 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167583 3187 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167589 3187 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167592 3187 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167597 3187 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167601 3187 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167606 3187 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167610 3187 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167616 3187 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167621 3187 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:07:52.170203 master-0 kubenswrapper[3187]: W1203 14:07:52.167625 3187 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167628 3187 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167632 3187 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167637 3187 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167641 3187 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167646 3187 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167650 3187 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: W1203 14:07:52.167654 3187 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: I1203 14:07:52.167660 3187 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:07:52.170971 master-0 kubenswrapper[3187]: I1203 14:07:52.168002 3187 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 03 14:07:52.172882 master-0 kubenswrapper[3187]: I1203 14:07:52.172828 3187 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 14:07:52.172990 master-0 kubenswrapper[3187]: I1203 14:07:52.172970 3187 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 14:07:52.173601 master-0 kubenswrapper[3187]: I1203 14:07:52.173578 3187 server.go:997] "Starting client certificate rotation" Dec 03 14:07:52.173601 master-0 kubenswrapper[3187]: I1203 14:07:52.173595 3187 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 14:07:52.174196 master-0 kubenswrapper[3187]: I1203 14:07:52.174079 3187 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 09:07:05.562453236 +0000 UTC Dec 03 14:07:52.174260 master-0 kubenswrapper[3187]: I1203 14:07:52.174193 3187 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h59m13.388265454s for next certificate rotation Dec 03 14:07:52.179648 master-0 kubenswrapper[3187]: I1203 14:07:52.179384 3187 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:07:52.182976 master-0 kubenswrapper[3187]: I1203 14:07:52.182938 3187 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:07:52.192704 master-0 kubenswrapper[3187]: I1203 14:07:52.192686 3187 log.go:25] "Validated CRI v1 runtime API" Dec 03 14:07:52.223381 master-0 kubenswrapper[3187]: I1203 14:07:52.223318 3187 log.go:25] "Validated CRI v1 image API" Dec 03 14:07:52.225271 master-0 kubenswrapper[3187]: I1203 14:07:52.225234 3187 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 14:07:52.228571 master-0 kubenswrapper[3187]: I1203 14:07:52.228530 3187 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3] Dec 03 14:07:52.228674 master-0 kubenswrapper[3187]: I1203 14:07:52.228564 3187 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Dec 03 14:07:52.245145 master-0 kubenswrapper[3187]: I1203 14:07:52.244887 3187 manager.go:217] Machine: {Timestamp:2025-12-03 14:07:52.24368295 +0000 UTC m=+0.210218865 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:764a923e-eafb-47f4-8635-9cb972b9b445 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} 
{Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:36:91:5c:9c:b9:c3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] 
SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction 
Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:07:52.245145 master-0 kubenswrapper[3187]: I1203 14:07:52.245123 3187 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 03 14:07:52.245281 master-0 kubenswrapper[3187]: I1203 14:07:52.245250 3187 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:07:52.246934 master-0 kubenswrapper[3187]: I1203 14:07:52.246874 3187 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:07:52.247832 master-0 kubenswrapper[3187]: I1203 14:07:52.247725 3187 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:07:52.248048 master-0 kubenswrapper[3187]: I1203 14:07:52.247830 3187 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:07:52.248090 master-0 kubenswrapper[3187]: I1203 14:07:52.248075 3187 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:07:52.248090 master-0 kubenswrapper[3187]: I1203 14:07:52.248088 3187 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:07:52.248433 master-0 kubenswrapper[3187]: I1203 14:07:52.248396 3187 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:07:52.248483 master-0 kubenswrapper[3187]: I1203 14:07:52.248463 3187 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:07:52.248900 master-0 kubenswrapper[3187]: I1203 14:07:52.248881 3187 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:07:52.249010 master-0 kubenswrapper[3187]: I1203 14:07:52.248995 3187 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:07:52.251123 master-0 kubenswrapper[3187]: I1203 14:07:52.251090 3187 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:07:52.251217 master-0 kubenswrapper[3187]: I1203 14:07:52.251145 3187 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:07:52.251217 master-0 kubenswrapper[3187]: I1203 14:07:52.251181 3187 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:07:52.251217 master-0 kubenswrapper[3187]: I1203 14:07:52.251199 3187 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:07:52.251217 master-0 kubenswrapper[3187]: I1203 14:07:52.251217 3187 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:07:52.253025 master-0 kubenswrapper[3187]: I1203 14:07:52.252990 3187 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:07:52.253359 master-0 kubenswrapper[3187]: I1203 14:07:52.253325 3187 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 03 14:07:52.254296 master-0 kubenswrapper[3187]: I1203 14:07:52.254262 3187 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:07:52.254657 master-0 kubenswrapper[3187]: I1203 14:07:52.254627 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:07:52.254657 master-0 kubenswrapper[3187]: I1203 14:07:52.254654 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254665 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254673 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254693 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254705 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254717 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254725 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254734 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254743 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254758 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:07:52.254804 master-0 kubenswrapper[3187]: I1203 14:07:52.254776 3187 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Dec 03 14:07:52.255322 master-0 kubenswrapper[3187]: I1203 14:07:52.255306 3187 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:07:52.255800 master-0 kubenswrapper[3187]: W1203 14:07:52.255731 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:07:52.255800 master-0 kubenswrapper[3187]: W1203 14:07:52.255732 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:07:52.255954 master-0 kubenswrapper[3187]: I1203 14:07:52.255819 3187 server.go:1280] "Started kubelet" Dec 03 14:07:52.255954 master-0 kubenswrapper[3187]: E1203 14:07:52.255847 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:07:52.255954 master-0 kubenswrapper[3187]: E1203 14:07:52.255841 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:07:52.256109 master-0 kubenswrapper[3187]: I1203 14:07:52.256067 3187 server.go:163] "Starting to listen" 
address="0.0.0.0" port=10250 Dec 03 14:07:52.256247 master-0 kubenswrapper[3187]: I1203 14:07:52.256057 3187 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:07:52.256247 master-0 kubenswrapper[3187]: I1203 14:07:52.256150 3187 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:07:52.256578 master-0 kubenswrapper[3187]: I1203 14:07:52.256546 3187 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:07:52.256872 master-0 kubenswrapper[3187]: I1203 14:07:52.256806 3187 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:07:52.257754 master-0 systemd[1]: Started Kubernetes Kubelet. Dec 03 14:07:52.258440 master-0 kubenswrapper[3187]: I1203 14:07:52.258391 3187 server.go:449] "Adding debug handlers to kubelet server" Dec 03 14:07:52.259033 master-0 kubenswrapper[3187]: E1203 14:07:52.258449 3187 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187db9c216c3339e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:07:52.255787934 +0000 UTC m=+0.222323829,LastTimestamp:2025-12-03 14:07:52.255787934 +0000 UTC m=+0.222323829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 
03 14:07:52.260824 master-0 kubenswrapper[3187]: I1203 14:07:52.260785 3187 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 14:07:52.260824 master-0 kubenswrapper[3187]: I1203 14:07:52.260824 3187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 14:07:52.261312 master-0 kubenswrapper[3187]: I1203 14:07:52.260950 3187 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 11:06:34.814745518 +0000 UTC Dec 03 14:07:52.261312 master-0 kubenswrapper[3187]: I1203 14:07:52.260994 3187 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h58m42.553754016s for next certificate rotation Dec 03 14:07:52.261312 master-0 kubenswrapper[3187]: I1203 14:07:52.261172 3187 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 14:07:52.261312 master-0 kubenswrapper[3187]: I1203 14:07:52.261182 3187 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 14:07:52.261312 master-0 kubenswrapper[3187]: I1203 14:07:52.261300 3187 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Dec 03 14:07:52.261569 master-0 kubenswrapper[3187]: E1203 14:07:52.261516 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:07:52.262894 master-0 kubenswrapper[3187]: W1203 14:07:52.262686 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:07:52.262894 master-0 kubenswrapper[3187]: E1203 14:07:52.262756 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:07:52.263619 master-0 kubenswrapper[3187]: E1203 14:07:52.263590 3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Dec 03 14:07:52.267850 master-0 kubenswrapper[3187]: I1203 14:07:52.266962 3187 factory.go:55] Registering systemd factory Dec 03 14:07:52.267850 master-0 kubenswrapper[3187]: I1203 14:07:52.267203 3187 factory.go:221] Registration of the systemd container factory successfully Dec 03 14:07:52.269187 master-0 kubenswrapper[3187]: I1203 14:07:52.269123 3187 factory.go:153] Registering CRI-O factory Dec 03 14:07:52.269263 master-0 kubenswrapper[3187]: I1203 14:07:52.269189 3187 factory.go:221] Registration of the crio container factory successfully Dec 03 14:07:52.269510 master-0 kubenswrapper[3187]: I1203 14:07:52.269486 3187 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 14:07:52.269560 master-0 kubenswrapper[3187]: I1203 14:07:52.269541 3187 factory.go:103] Registering Raw factory Dec 03 14:07:52.269599 master-0 kubenswrapper[3187]: I1203 14:07:52.269564 3187 manager.go:1196] Started watching for new ooms in manager Dec 03 14:07:52.272690 master-0 kubenswrapper[3187]: I1203 14:07:52.272493 3187 manager.go:319] Starting recovery of all containers Dec 03 14:07:52.280477 master-0 kubenswrapper[3187]: I1203 14:07:52.280403 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext="" Dec 03 14:07:52.280477 master-0 kubenswrapper[3187]: I1203 14:07:52.280475 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88" seLinuxMountContext="" Dec 03 14:07:52.280477 master-0 kubenswrapper[3187]: I1203 14:07:52.280486 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280495 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280504 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280512 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280521 3187 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280529 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280540 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280548 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280556 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b02244d0-f4ef-4702-950d-9e3fb5ced128" volumeName="kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280565 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280573 3187 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280585 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280598 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280609 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280616 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280626 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext="" Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280634 3187 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx" seLinuxMountContext=""
Dec 03 14:07:52.280635 master-0 kubenswrapper[3187]: I1203 14:07:52.280643 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280653 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280662 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280670 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280678 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280686 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280697 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280711 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280720 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280728 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280774 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280784 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280794 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280803 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280813 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280821 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280831 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280839 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280849 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280858 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280866 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities" seLinuxMountContext=""
Dec 03 14:07:52.281093 master-0 kubenswrapper[3187]: I1203 14:07:52.280874 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280882 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280890 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280898 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280907 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280915 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280923 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280931 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280940 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280949 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280957 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280966 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280977 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280985 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.280994 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281003 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281011 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281020 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281038 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281047 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281056 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:07:52.281788 master-0 kubenswrapper[3187]: I1203 14:07:52.281064 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" volumeName="kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281071 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281079 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281087 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281097 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281105 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281113 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281122 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281132 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281139 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281147 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281156 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0a2889-39a5-471e-bd46-958e2f8eacaa" volumeName="kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281164 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281173 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281181 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281188 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281196 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281205 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281213 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281229 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281238 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs" seLinuxMountContext=""
Dec 03 14:07:52.282406 master-0 kubenswrapper[3187]: I1203 14:07:52.281246 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281261 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281268 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281277 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281285 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281293 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281301 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281309 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281318 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281325 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281333 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281342 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281350 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281358 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281366 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281377 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281386 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281394 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281403 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281432 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281475 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client" seLinuxMountContext=""
Dec 03 14:07:52.283187 master-0 kubenswrapper[3187]: I1203 14:07:52.281491 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281507 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281517 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281530 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281539 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281548 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281557 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281565 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281575 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281588 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281636 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281650 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281661 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281669 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281677 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281687 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281699 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281709 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281753 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281934 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281943 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8" seLinuxMountContext=""
Dec 03 14:07:52.283761 master-0 kubenswrapper[3187]: I1203 14:07:52.281952 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" volumeName="kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.281961 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282015 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282025 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282035 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282089 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282101 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282112 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282127 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282138 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42c95e54-b4ba-4b19-a97c-abcec840ac5d" volumeName="kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282149 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282195 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282211 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282237 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282246 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282255 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282298 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282307 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282316 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282325 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282333 3187 reconstruct.go:130] "Volume is
marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle" seLinuxMountContext="" Dec 03 14:07:52.284293 master-0 kubenswrapper[3187]: I1203 14:07:52.282356 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282365 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282373 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282389 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282470 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282482 3187 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282492 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f723d97-5c65-4ae7-9085-26db8b4f2f52" volumeName="kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282501 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282534 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282573 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282582 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 
14:07:52.282590 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282598 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282606 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282615 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282623 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282649 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282659 
3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282667 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282691 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282731 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:07:52.284867 master-0 kubenswrapper[3187]: I1203 14:07:52.282803 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282812 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282820 
3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282848 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282857 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282865 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282875 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282916 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282926 
3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282934 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282943 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282969 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282977 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282985 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.282994 
3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283003 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283011 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283019 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283028 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283050 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 
14:07:52.283111 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets" seLinuxMountContext="" Dec 03 14:07:52.285381 master-0 kubenswrapper[3187]: I1203 14:07:52.283119 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283128 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283136 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283144 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283152 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283160 3187 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283227 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283238 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283263 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283271 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283280 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283290 3187 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283298 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283306 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283329 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283337 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283345 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: 
I1203 14:07:52.283353 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283361 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283370 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283380 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls" seLinuxMountContext="" Dec 03 14:07:52.286050 master-0 kubenswrapper[3187]: I1203 14:07:52.283392 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283433 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283472 3187 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283497 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283506 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283516 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283527 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283537 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283549 3187 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283578 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283639 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283649 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283667 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283722 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 
14:07:52.283733 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283743 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283755 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283781 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283790 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283822 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" volumeName="kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch" seLinuxMountContext="" Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283835 
3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.286846 master-0 kubenswrapper[3187]: I1203 14:07:52.283843 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283852 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283861 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283874 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283898 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283907 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283915 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283925 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283934 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283943 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283953 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283961 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.283998 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284011 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284038 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284048 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284061 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284070 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284111 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284155 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284212 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.287624 master-0 kubenswrapper[3187]: I1203 14:07:52.284283 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284293 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284334 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284344 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284352 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284361 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284370 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284396 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284404 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284458 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284466 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284474 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284482 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284490 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284498 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284524 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284604 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284616 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284627 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284677 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284688 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities" seLinuxMountContext=""
Dec 03 14:07:52.288388 master-0 kubenswrapper[3187]: I1203 14:07:52.284695 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" volumeName="kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284703 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284737 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284745 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284805 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284814 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284855 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284871 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284921 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284966 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.284991 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285032 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285040 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285048 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285059 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285105 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285118 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285132 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285174 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285185 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285226 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.289070 master-0 kubenswrapper[3187]: I1203 14:07:52.285234 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285242 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285255 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285268 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285276 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285401 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285411 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285435 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285443 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285451 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285463 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285472 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285480 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285588 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285597 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285630 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285639 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285647 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285682 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285713 3187 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext=""
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285722 3187 reconstruct.go:97] "Volume reconstruction finished"
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.285729 3187 reconciler.go:26] "Reconciler: start to sync state"
Dec 03 14:07:52.289676 master-0 kubenswrapper[3187]: I1203 14:07:52.288182 3187 manager.go:324] Recovery completed
Dec 03 14:07:52.298412 master-0 kubenswrapper[3187]: I1203 14:07:52.298351 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.299834 master-0 kubenswrapper[3187]: I1203 14:07:52.299768 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.299887 master-0 kubenswrapper[3187]: I1203 14:07:52.299843 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.299887 master-0 kubenswrapper[3187]: I1203 14:07:52.299854 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.301023 master-0 kubenswrapper[3187]: I1203 14:07:52.300987 3187 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 03 14:07:52.301023 master-0 kubenswrapper[3187]: I1203 14:07:52.301011 3187 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 03 14:07:52.301104 master-0 kubenswrapper[3187]: I1203 14:07:52.301038 3187 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 14:07:52.306885 master-0 kubenswrapper[3187]: I1203 14:07:52.306843 3187 policy_none.go:49] "None policy: Start"
Dec 03 14:07:52.308344 master-0 kubenswrapper[3187]: I1203 14:07:52.308306 3187 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 03 14:07:52.308399 master-0 kubenswrapper[3187]: I1203 14:07:52.308351 3187 state_mem.go:35] "Initializing new in-memory state store"
Dec 03 14:07:52.361686 master-0 kubenswrapper[3187]: E1203 14:07:52.361635 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:07:52.378912 master-0 kubenswrapper[3187]: I1203 14:07:52.378859 3187 manager.go:334] "Starting Device Plugin manager"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.379103 3187 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.379121 3187 server.go:79] "Starting device plugin registration server"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.379596 3187 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.379611 3187 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.381579 3187 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.381781 3187 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: I1203 14:07:52.381789 3187 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 03 14:07:52.437078 master-0 kubenswrapper[3187]: E1203 14:07:52.386850 3187 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Dec 03 14:07:52.438654 master-0 kubenswrapper[3187]: I1203 14:07:52.438589 3187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 03 14:07:52.439958 master-0 kubenswrapper[3187]: I1203 14:07:52.439924 3187 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 03 14:07:52.440029 master-0 kubenswrapper[3187]: I1203 14:07:52.439982 3187 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 03 14:07:52.440029 master-0 kubenswrapper[3187]: I1203 14:07:52.440019 3187 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 03 14:07:52.440115 master-0 kubenswrapper[3187]: E1203 14:07:52.440075 3187 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 03 14:07:52.441582 master-0 kubenswrapper[3187]: W1203 14:07:52.441516 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:52.441661 master-0 kubenswrapper[3187]: E1203 14:07:52.441587 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 14:07:52.466195 master-0 kubenswrapper[3187]: E1203 14:07:52.466126 3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Dec 03 14:07:52.480495 master-0 kubenswrapper[3187]: I1203 14:07:52.480411 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.481384 master-0 kubenswrapper[3187]: I1203 14:07:52.481349 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.481465 master-0 kubenswrapper[3187]: I1203 14:07:52.481390 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.481465 master-0 kubenswrapper[3187]: I1203 14:07:52.481402 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.481465 master-0 kubenswrapper[3187]: I1203 14:07:52.481450 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:07:52.482229 master-0 kubenswrapper[3187]: E1203 14:07:52.482184 3187 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 14:07:52.541320 master-0 kubenswrapper[3187]: I1203 14:07:52.541212 3187 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0"]
Dec 03 14:07:52.541457 master-0 kubenswrapper[3187]: I1203 14:07:52.541384 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.542804 master-0 kubenswrapper[3187]: I1203 14:07:52.542758 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.542804 master-0 kubenswrapper[3187]: I1203 14:07:52.542801 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.542903 master-0 kubenswrapper[3187]: I1203 14:07:52.542814 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.542934 master-0 kubenswrapper[3187]: I1203 14:07:52.542921 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.543327 master-0 kubenswrapper[3187]: I1203 14:07:52.543289 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:07:52.543391 master-0 kubenswrapper[3187]: I1203 14:07:52.543373 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.543650 master-0 kubenswrapper[3187]: I1203 14:07:52.543621 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.543650 master-0 kubenswrapper[3187]: I1203 14:07:52.543645 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.543650 master-0 kubenswrapper[3187]: I1203 14:07:52.543652 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.543838 master-0 kubenswrapper[3187]: I1203 14:07:52.543799 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.543950 master-0 kubenswrapper[3187]: I1203 14:07:52.543921 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:07:52.543985 master-0 kubenswrapper[3187]: I1203 14:07:52.543966 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.544284 master-0 kubenswrapper[3187]: I1203 14:07:52.544259 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.544318 master-0 kubenswrapper[3187]: I1203 14:07:52.544285 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.544318 master-0 kubenswrapper[3187]: I1203 14:07:52.544296 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.544384 master-0 kubenswrapper[3187]: I1203 14:07:52.544316 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.544384 master-0 kubenswrapper[3187]: I1203 14:07:52.544332 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.544384 master-0 kubenswrapper[3187]: I1203 14:07:52.544341 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:52.544498 master-0 kubenswrapper[3187]: I1203 14:07:52.544443 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:52.544498 master-0 kubenswrapper[3187]: I1203 14:07:52.544467 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:52.544498 master-0 kubenswrapper[3187]: I1203 14:07:52.544484 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:52.544498 master-0
kubenswrapper[3187]: I1203 14:07:52.544493 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.544958 master-0 kubenswrapper[3187]: I1203 14:07:52.544908 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.545025 master-0 kubenswrapper[3187]: I1203 14:07:52.545000 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.545457 master-0 kubenswrapper[3187]: I1203 14:07:52.545433 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.545499 master-0 kubenswrapper[3187]: I1203 14:07:52.545460 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.545499 master-0 kubenswrapper[3187]: I1203 14:07:52.545468 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.545657 master-0 kubenswrapper[3187]: I1203 14:07:52.545633 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.545977 master-0 kubenswrapper[3187]: I1203 14:07:52.545936 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:52.546025 master-0 kubenswrapper[3187]: I1203 14:07:52.546012 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.546148 master-0 kubenswrapper[3187]: I1203 14:07:52.546119 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.546184 master-0 kubenswrapper[3187]: I1203 14:07:52.546152 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.546184 master-0 kubenswrapper[3187]: I1203 14:07:52.546160 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.546184 master-0 kubenswrapper[3187]: I1203 14:07:52.546166 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.546268 master-0 kubenswrapper[3187]: I1203 14:07:52.546177 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.546301 master-0 kubenswrapper[3187]: I1203 14:07:52.546269 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.546384 master-0 kubenswrapper[3187]: I1203 14:07:52.546361 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.546511 master-0 kubenswrapper[3187]: I1203 14:07:52.546472 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.546511 master-0 kubenswrapper[3187]: I1203 14:07:52.546502 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.547002 master-0 kubenswrapper[3187]: I1203 14:07:52.546973 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.547039 master-0 kubenswrapper[3187]: I1203 14:07:52.547008 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.547039 master-0 kubenswrapper[3187]: I1203 14:07:52.547018 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.547185 master-0 kubenswrapper[3187]: I1203 14:07:52.547156 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.547185 master-0 kubenswrapper[3187]: I1203 14:07:52.547179 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.547260 master-0 kubenswrapper[3187]: I1203 14:07:52.547205 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.547291 master-0 kubenswrapper[3187]: I1203 14:07:52.547264 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.547291 master-0 kubenswrapper[3187]: I1203 14:07:52.547285 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.547492 master-0 kubenswrapper[3187]: I1203 14:07:52.547461 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 
14:07:52.547557 master-0 kubenswrapper[3187]: I1203 14:07:52.547497 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.547557 master-0 kubenswrapper[3187]: I1203 14:07:52.547509 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.547827 master-0 kubenswrapper[3187]: I1203 14:07:52.547809 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.547871 master-0 kubenswrapper[3187]: I1203 14:07:52.547835 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.547871 master-0 kubenswrapper[3187]: I1203 14:07:52.547846 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.589193 master-0 kubenswrapper[3187]: I1203 14:07:52.589148 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.589193 master-0 kubenswrapper[3187]: I1203 14:07:52.589188 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.589295 master-0 kubenswrapper[3187]: I1203 14:07:52.589209 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.589295 master-0 kubenswrapper[3187]: I1203 14:07:52.589229 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:07:52.589295 master-0 kubenswrapper[3187]: I1203 14:07:52.589248 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:07:52.589295 master-0 kubenswrapper[3187]: I1203 14:07:52.589266 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.589295 master-0 kubenswrapper[3187]: I1203 14:07:52.589283 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 
03 14:07:52.589467 master-0 kubenswrapper[3187]: I1203 14:07:52.589305 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589467 master-0 kubenswrapper[3187]: I1203 14:07:52.589374 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589467 master-0 kubenswrapper[3187]: I1203 14:07:52.589396 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:52.589554 master-0 kubenswrapper[3187]: I1203 14:07:52.589493 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.589554 master-0 kubenswrapper[3187]: I1203 14:07:52.589540 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.589616 master-0 kubenswrapper[3187]: I1203 14:07:52.589560 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589616 master-0 kubenswrapper[3187]: I1203 14:07:52.589577 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589616 master-0 kubenswrapper[3187]: I1203 14:07:52.589600 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589710 master-0 kubenswrapper[3187]: I1203 14:07:52.589615 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.589710 master-0 kubenswrapper[3187]: I1203 14:07:52.589688 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.589767 master-0 kubenswrapper[3187]: I1203 14:07:52.589733 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:07:52.589838 master-0 kubenswrapper[3187]: I1203 14:07:52.589770 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.589838 master-0 kubenswrapper[3187]: I1203 14:07:52.589815 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:52.589931 master-0 kubenswrapper[3187]: I1203 14:07:52.589880 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:52.589981 master-0 kubenswrapper[3187]: I1203 14:07:52.589928 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.589981 master-0 kubenswrapper[3187]: I1203 14:07:52.589956 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:07:52.682945 master-0 kubenswrapper[3187]: I1203 14:07:52.682834 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:52.685012 master-0 kubenswrapper[3187]: I1203 14:07:52.684960 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:52.685131 master-0 kubenswrapper[3187]: I1203 14:07:52.685030 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:52.685131 master-0 kubenswrapper[3187]: I1203 14:07:52.685056 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:52.685131 master-0 kubenswrapper[3187]: I1203 14:07:52.685104 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:07:52.686455 master-0 kubenswrapper[3187]: E1203 14:07:52.686377 3187 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 14:07:52.691717 master-0 kubenswrapper[3187]: I1203 14:07:52.691672 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.691778 master-0 kubenswrapper[3187]: I1203 14:07:52.691729 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.691820 master-0 kubenswrapper[3187]: I1203 14:07:52.691772 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.691820 master-0 kubenswrapper[3187]: I1203 14:07:52.691807 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.691880 master-0 kubenswrapper[3187]: I1203 14:07:52.691838 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:07:52.691880 master-0 kubenswrapper[3187]: I1203 
14:07:52.691857 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691902 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691924 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691939 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691870 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691989 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691995 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.692023 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.691875 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.692042 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: 
\"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.692067 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.692376 master-0 kubenswrapper[3187]: I1203 14:07:52.692360 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692438 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692478 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692547 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692592 3187 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692602 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692642 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692654 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692671 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692701 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.692708 master-0 kubenswrapper[3187]: I1203 14:07:52.692706 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692712 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692729 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692746 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692761 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692559 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692793 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692841 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692854 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692846 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692845 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692890 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692869 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692867 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692950 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.692985 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.693002 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.693019 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:07:52.693376 master-0 kubenswrapper[3187]: I1203 14:07:52.693045 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.693945 master-0 kubenswrapper[3187]: I1203 14:07:52.693122 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:07:52.867862 master-0 kubenswrapper[3187]: E1203 14:07:52.867782 3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Dec 03 14:07:52.882468 master-0 kubenswrapper[3187]: I1203 14:07:52.882395 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:07:52.898889 master-0 kubenswrapper[3187]: I1203 14:07:52.898853 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:07:52.906581 master-0 kubenswrapper[3187]: W1203 14:07:52.906542 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2fa610bb2a39c39fcdd00db03a511a.slice/crio-80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e WatchSource:0}: Error finding container 80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e: Status 404 returned error can't find the container with id 80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e
Dec 03 14:07:52.908774 master-0 kubenswrapper[3187]: W1203 14:07:52.908747 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb495b0c38f2c54e7cc46282c5f92aab5.slice/crio-fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6 WatchSource:0}: Error finding container fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6: Status 404 returned error can't find the container with id fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6
Dec 03 14:07:52.920253 master-0 kubenswrapper[3187]: I1203 14:07:52.920208 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Dec 03 14:07:52.927411 master-0 kubenswrapper[3187]: I1203 14:07:52.927143 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:07:52.934662 master-0 kubenswrapper[3187]: W1203 14:07:52.934600 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf07eb54db570834b7c9a90b6b07403.slice/crio-ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e WatchSource:0}: Error finding container ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e: Status 404 returned error can't find the container with id ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e
Dec 03 14:07:52.939892 master-0 kubenswrapper[3187]: I1203 14:07:52.939858 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:07:52.941373 master-0 kubenswrapper[3187]: W1203 14:07:52.941207 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5aa2d6b41f5e21a89224256dc48af14.slice/crio-bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a WatchSource:0}: Error finding container bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a: Status 404 returned error can't find the container with id bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a
Dec 03 14:07:52.947973 master-0 kubenswrapper[3187]: I1203 14:07:52.947930 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:07:52.950086 master-0 kubenswrapper[3187]: W1203 14:07:52.950041 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc98a8d85d3901d33f6fe192bdc7172aa.slice/crio-f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251 WatchSource:0}: Error finding container f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251: Status 404 returned error can't find the container with id f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251
Dec 03 14:07:52.965944 master-0 kubenswrapper[3187]: W1203 14:07:52.965816 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bce50c457ac1f4721bc81a570dd238a.slice/crio-b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8 WatchSource:0}: Error finding container b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8: Status 404 returned error can't find the container with id b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8
Dec 03 14:07:53.087294 master-0 kubenswrapper[3187]: I1203 14:07:53.087075 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.089196 master-0 kubenswrapper[3187]: I1203 14:07:53.089106 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.089196 master-0 kubenswrapper[3187]: I1203 14:07:53.089188 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.089379 master-0 kubenswrapper[3187]: I1203 14:07:53.089211 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.089379 master-0 kubenswrapper[3187]: I1203 14:07:53.089252 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:07:53.090960 master-0 kubenswrapper[3187]: E1203 14:07:53.090886 3187 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 14:07:53.259659 master-0 kubenswrapper[3187]: I1203 14:07:53.259284 3187 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:53.445455 master-0 kubenswrapper[3187]: I1203 14:07:53.445380 3187 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" exitCode=0
Dec 03 14:07:53.445861 master-0 kubenswrapper[3187]: I1203 14:07:53.445653 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7"}
Dec 03 14:07:53.445861 master-0 kubenswrapper[3187]: I1203 14:07:53.445750 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e"}
Dec 03 14:07:53.445861 master-0 kubenswrapper[3187]: I1203 14:07:53.445854 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.448175 master-0 kubenswrapper[3187]: I1203 14:07:53.447148 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.448175 master-0 kubenswrapper[3187]: I1203 14:07:53.447201 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.448175 master-0 kubenswrapper[3187]: I1203 14:07:53.447216 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.448542 master-0 kubenswrapper[3187]: I1203 14:07:53.448502 3187 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864" exitCode=0
Dec 03 14:07:53.448596 master-0 kubenswrapper[3187]: I1203 14:07:53.448576 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864"}
Dec 03 14:07:53.448883 master-0 kubenswrapper[3187]: I1203 14:07:53.448601 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6"}
Dec 03 14:07:53.448883 master-0 kubenswrapper[3187]: I1203 14:07:53.448857 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.450021 master-0 kubenswrapper[3187]: I1203 14:07:53.449973 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.450123 master-0 kubenswrapper[3187]: I1203 14:07:53.450039 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.450123 master-0 kubenswrapper[3187]: I1203 14:07:53.450070 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.452249 master-0 kubenswrapper[3187]: I1203 14:07:53.452212 3187 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f" exitCode=0
Dec 03 14:07:53.452322 master-0 kubenswrapper[3187]: I1203 14:07:53.452286 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f"}
Dec 03 14:07:53.452402 master-0 kubenswrapper[3187]: I1203 14:07:53.452334 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e"}
Dec 03 14:07:53.452485 master-0 kubenswrapper[3187]: I1203 14:07:53.452466 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.453333 master-0 kubenswrapper[3187]: I1203 14:07:53.453242 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.453333 master-0 kubenswrapper[3187]: I1203 14:07:53.453274 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.453333 master-0 kubenswrapper[3187]: I1203 14:07:53.453328 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.454183 master-0 kubenswrapper[3187]: I1203 14:07:53.454156 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060"}
Dec 03 14:07:53.454226 master-0 kubenswrapper[3187]: I1203 14:07:53.454183 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8"}
Dec 03 14:07:53.456218 master-0 kubenswrapper[3187]: I1203 14:07:53.456172 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e"}
Dec 03 14:07:53.456296 master-0 kubenswrapper[3187]: I1203 14:07:53.456222 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251"}
Dec 03 14:07:53.456296 master-0 kubenswrapper[3187]: I1203 14:07:53.456292 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.459567 master-0 kubenswrapper[3187]: I1203 14:07:53.459475 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.459567 master-0 kubenswrapper[3187]: I1203 14:07:53.459541 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.459567 master-0 kubenswrapper[3187]: I1203 14:07:53.459557 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.462007 master-0 kubenswrapper[3187]: I1203 14:07:53.461802 3187 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="cc112e6842d5a1677f57d5cb903a1e5d6f4646550a794d787fb3ec9cc8aeb9a3" exitCode=0
Dec 03 14:07:53.462007 master-0 kubenswrapper[3187]: I1203 14:07:53.461850 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerDied","Data":"cc112e6842d5a1677f57d5cb903a1e5d6f4646550a794d787fb3ec9cc8aeb9a3"}
Dec 03 14:07:53.462007 master-0 kubenswrapper[3187]: I1203 14:07:53.461884 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a"}
Dec 03 14:07:53.462007 master-0 kubenswrapper[3187]: I1203 14:07:53.462000 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.463281 master-0 kubenswrapper[3187]: I1203 14:07:53.462882 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.463281 master-0 kubenswrapper[3187]: I1203 14:07:53.462917 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.463281 master-0 kubenswrapper[3187]: I1203 14:07:53.462927 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.464657 master-0 kubenswrapper[3187]: I1203 14:07:53.464541 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.465262 master-0 kubenswrapper[3187]: I1203 14:07:53.465232 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.465262 master-0 kubenswrapper[3187]: I1203 14:07:53.465260 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.465381 master-0 kubenswrapper[3187]: I1203 14:07:53.465273 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.572168 master-0 kubenswrapper[3187]: W1203 14:07:53.572022 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:53.572168 master-0 kubenswrapper[3187]: E1203 14:07:53.572133 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 14:07:53.669109 master-0 kubenswrapper[3187]: E1203 14:07:53.669039 3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Dec 03 14:07:53.695320 master-0 kubenswrapper[3187]: W1203 14:07:53.695228 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:53.695530 master-0 kubenswrapper[3187]: E1203 14:07:53.695320 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 14:07:53.753199 master-0 kubenswrapper[3187]: W1203 14:07:53.751388 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:53.753199 master-0 kubenswrapper[3187]: E1203 14:07:53.751954 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 14:07:53.777134 master-0 kubenswrapper[3187]: W1203 14:07:53.776998 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:53.777134 master-0 kubenswrapper[3187]: E1203 14:07:53.777103 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Dec 03 14:07:53.891362 master-0 kubenswrapper[3187]: I1203 14:07:53.891125 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:53.892710 master-0 kubenswrapper[3187]: I1203 14:07:53.892673 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:53.892710 master-0 kubenswrapper[3187]: I1203 14:07:53.892716 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:53.892826 master-0 kubenswrapper[3187]: I1203 14:07:53.892726 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:53.892826 master-0 kubenswrapper[3187]: I1203 14:07:53.892751 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:07:53.894321 master-0 kubenswrapper[3187]: E1203 14:07:53.894275 3187 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Dec 03 14:07:54.259152 master-0 kubenswrapper[3187]: I1203 14:07:54.258988 3187 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Dec 03 14:07:54.467168 master-0 kubenswrapper[3187]: I1203 14:07:54.467117 3187 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" exitCode=0
Dec 03 14:07:54.467937 master-0 kubenswrapper[3187]: I1203 14:07:54.467191 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1"}
Dec 03 14:07:54.467937 master-0 kubenswrapper[3187]: I1203 14:07:54.467341 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:54.468339 master-0 kubenswrapper[3187]: I1203 14:07:54.468320 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:54.468394 master-0 kubenswrapper[3187]: I1203 14:07:54.468350 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:54.468394 master-0 kubenswrapper[3187]: I1203 14:07:54.468362 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:54.471580 master-0 kubenswrapper[3187]: I1203 14:07:54.471531 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae"}
Dec 03 14:07:54.471706 master-0 kubenswrapper[3187]: I1203 14:07:54.471679 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:54.472514 master-0 kubenswrapper[3187]: I1203 14:07:54.472485 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:54.472597 master-0 kubenswrapper[3187]: I1203 14:07:54.472520 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:54.472597 master-0 kubenswrapper[3187]: I1203 14:07:54.472531 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:54.475238 master-0 kubenswrapper[3187]: I1203 14:07:54.475188 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1"}
Dec 03 14:07:54.475344 master-0 kubenswrapper[3187]: I1203 14:07:54.475254 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12"}
Dec 03 14:07:54.478442 master-0 kubenswrapper[3187]: I1203 14:07:54.478364 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68"}
Dec 03 14:07:54.478561 master-0 kubenswrapper[3187]: I1203 14:07:54.478404 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:54.479744 master-0 kubenswrapper[3187]: I1203 14:07:54.479713 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:54.479850 master-0 kubenswrapper[3187]: I1203 14:07:54.479761 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:54.479850 master-0 kubenswrapper[3187]: I1203 14:07:54.479778 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:54.481661 master-0 kubenswrapper[3187]: I1203 14:07:54.481626 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"353ef5bad57ce46db98c0549f921ee8f0ee580567553f3ba9950d113638096f2"}
Dec 03 14:07:54.481661 master-0 kubenswrapper[3187]: I1203 14:07:54.481660 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"1676af95112121a9e343fac781d61b54d4f18bb5d03944dc4409d844ba4c9c5e"}
Dec 03 14:07:55.488207 master-0 kubenswrapper[3187]: I1203 14:07:55.488065 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"749d4a97321672e94f0f4d6c55d7fa485dfbd3bbe5480f2c579faa82f311605b"}
Dec 03 14:07:55.488207 master-0 kubenswrapper[3187]: I1203 14:07:55.488120 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"aa440bd50b25afd3bbdcd911eb6ddd2cb8d5f29270fc9664a389f142c4f8cf24"}
Dec 03 14:07:55.488207 master-0 kubenswrapper[3187]: I1203 14:07:55.488135 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"e4e74143105a836ab029b335e356e20dcf63f1dfd4df0559287d53a803dfe9b1"}
Dec 03 14:07:55.489158 master-0 kubenswrapper[3187]: I1203 14:07:55.488242 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:55.489649 master-0 kubenswrapper[3187]: I1203 14:07:55.489626 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:55.489728 master-0 kubenswrapper[3187]: I1203 14:07:55.489659 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:55.489728 master-0 kubenswrapper[3187]: I1203 14:07:55.489668 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:55.491133 master-0 kubenswrapper[3187]: I1203 14:07:55.491107 3187 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" exitCode=0
Dec 03 14:07:55.491218 master-0 kubenswrapper[3187]: I1203 14:07:55.491164 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045"}
Dec 03 14:07:55.491304 master-0 kubenswrapper[3187]: I1203 14:07:55.491286 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:55.494068 master-0 kubenswrapper[3187]: I1203 14:07:55.494035 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:55.494068 master-0 kubenswrapper[3187]: I1203 14:07:55.494071 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:55.494205 master-0 kubenswrapper[3187]: I1203 14:07:55.494082 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:55.494992 master-0 kubenswrapper[3187]: I1203 14:07:55.494969 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:55.496215 master-0 kubenswrapper[3187]: I1203 14:07:55.496181 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:55.496286 master-0 kubenswrapper[3187]: I1203 14:07:55.496217 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:55.496286 master-0 kubenswrapper[3187]: I1203 14:07:55.496231 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:55.496286 master-0 kubenswrapper[3187]: I1203 14:07:55.496252 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:07:55.499079 master-0 kubenswrapper[3187]: I1203 14:07:55.499047 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:55.499304 master-0 kubenswrapper[3187]: I1203 14:07:55.499247 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3"}
Dec 03 14:07:55.499364 master-0 kubenswrapper[3187]: I1203 14:07:55.499346 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:07:55.500230 master-0 kubenswrapper[3187]: I1203 14:07:55.500211 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:55.500399 master-0 kubenswrapper[3187]: I1203 14:07:55.500387 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:55.500509 master-0 kubenswrapper[3187]: I1203 14:07:55.500496 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:07:55.500715 master-0 kubenswrapper[3187]: I1203 14:07:55.500299 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:07:55.500760 master-0 kubenswrapper[3187]: I1203 14:07:55.500728 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:07:55.500760 master-0 kubenswrapper[3187]: I1203 14:07:55.500742 3187 kubelet_node_status.go:724] "Recording event message for
node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510666 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a"} Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510729 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3"} Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510746 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367"} Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510758 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be"} Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510763 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510783 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510807 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:56.511378 master-0 kubenswrapper[3187]: I1203 14:07:56.510835 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:56.512710 master-0 
kubenswrapper[3187]: I1203 14:07:56.512684 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:56.512774 master-0 kubenswrapper[3187]: I1203 14:07:56.512707 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:56.512774 master-0 kubenswrapper[3187]: I1203 14:07:56.512718 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:56.512832 master-0 kubenswrapper[3187]: I1203 14:07:56.512776 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:56.512832 master-0 kubenswrapper[3187]: I1203 14:07:56.512738 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:56.512832 master-0 kubenswrapper[3187]: I1203 14:07:56.512810 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:56.753457 master-0 kubenswrapper[3187]: I1203 14:07:56.753148 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:57.518869 master-0 kubenswrapper[3187]: I1203 14:07:57.518797 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82"} Dec 03 14:07:57.518869 master-0 kubenswrapper[3187]: I1203 14:07:57.518840 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:07:57.519899 master-0 kubenswrapper[3187]: I1203 14:07:57.518893 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:57.519899 master-0 kubenswrapper[3187]: 
I1203 14:07:57.519461 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:57.519899 master-0 kubenswrapper[3187]: I1203 14:07:57.519734 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:57.519899 master-0 kubenswrapper[3187]: I1203 14:07:57.519756 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:57.519899 master-0 kubenswrapper[3187]: I1203 14:07:57.519764 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:57.520509 master-0 kubenswrapper[3187]: I1203 14:07:57.520489 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:57.520509 master-0 kubenswrapper[3187]: I1203 14:07:57.520510 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:57.520870 master-0 kubenswrapper[3187]: I1203 14:07:57.520518 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:57.801793 master-0 kubenswrapper[3187]: I1203 14:07:57.801610 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:57.801793 master-0 kubenswrapper[3187]: I1203 14:07:57.801803 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:57.803080 master-0 kubenswrapper[3187]: I1203 14:07:57.803042 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:57.803080 master-0 kubenswrapper[3187]: I1203 14:07:57.803081 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Dec 03 14:07:57.803227 master-0 kubenswrapper[3187]: I1203 14:07:57.803092 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:58.343621 master-0 kubenswrapper[3187]: I1203 14:07:58.343506 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:58.525133 master-0 kubenswrapper[3187]: I1203 14:07:58.522717 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:58.525133 master-0 kubenswrapper[3187]: I1203 14:07:58.522864 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:58.526238 master-0 kubenswrapper[3187]: I1203 14:07:58.526132 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:58.526238 master-0 kubenswrapper[3187]: I1203 14:07:58.526180 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:58.526238 master-0 kubenswrapper[3187]: I1203 14:07:58.526191 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:58.526745 master-0 kubenswrapper[3187]: I1203 14:07:58.526668 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:58.526870 master-0 kubenswrapper[3187]: I1203 14:07:58.526760 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:58.526870 master-0 kubenswrapper[3187]: I1203 14:07:58.526789 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:59.084082 master-0 kubenswrapper[3187]: I1203 14:07:59.083884 3187 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:07:59.524908 master-0 kubenswrapper[3187]: I1203 14:07:59.524758 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:59.525743 master-0 kubenswrapper[3187]: I1203 14:07:59.525712 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:59.525743 master-0 kubenswrapper[3187]: I1203 14:07:59.525744 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:59.526158 master-0 kubenswrapper[3187]: I1203 14:07:59.525754 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:59.830789 master-0 kubenswrapper[3187]: I1203 14:07:59.830596 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:07:59.831002 master-0 kubenswrapper[3187]: I1203 14:07:59.830852 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:07:59.832105 master-0 kubenswrapper[3187]: I1203 14:07:59.832065 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:07:59.832105 master-0 kubenswrapper[3187]: I1203 14:07:59.832107 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:07:59.832194 master-0 kubenswrapper[3187]: I1203 14:07:59.832116 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:07:59.920286 master-0 kubenswrapper[3187]: I1203 14:07:59.920175 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:00.497893 master-0 kubenswrapper[3187]: I1203 14:08:00.497752 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:00.498291 master-0 kubenswrapper[3187]: I1203 14:08:00.498013 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:00.499755 master-0 kubenswrapper[3187]: I1203 14:08:00.499700 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:00.499755 master-0 kubenswrapper[3187]: I1203 14:08:00.499735 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:00.499755 master-0 kubenswrapper[3187]: I1203 14:08:00.499746 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:00.527051 master-0 kubenswrapper[3187]: I1203 14:08:00.526986 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:00.527866 master-0 kubenswrapper[3187]: I1203 14:08:00.527834 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:00.527969 master-0 kubenswrapper[3187]: I1203 14:08:00.527872 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:00.527969 master-0 kubenswrapper[3187]: I1203 14:08:00.527882 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:00.535570 master-0 kubenswrapper[3187]: I1203 14:08:00.535517 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:00.535716 master-0 
kubenswrapper[3187]: I1203 14:08:00.535665 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:00.536796 master-0 kubenswrapper[3187]: I1203 14:08:00.536765 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:00.536796 master-0 kubenswrapper[3187]: I1203 14:08:00.536796 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:00.536916 master-0 kubenswrapper[3187]: I1203 14:08:00.536807 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:00.743648 master-0 kubenswrapper[3187]: I1203 14:08:00.743540 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:00.743934 master-0 kubenswrapper[3187]: I1203 14:08:00.743780 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:00.745378 master-0 kubenswrapper[3187]: I1203 14:08:00.745326 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:00.745478 master-0 kubenswrapper[3187]: I1203 14:08:00.745382 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:00.745478 master-0 kubenswrapper[3187]: I1203 14:08:00.745396 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:00.802159 master-0 kubenswrapper[3187]: I1203 14:08:00.801853 3187 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline 
exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 14:08:02.084508 master-0 kubenswrapper[3187]: I1203 14:08:02.084359 3187 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 03 14:08:02.387086 master-0 kubenswrapper[3187]: E1203 14:08:02.386958 3187 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 14:08:04.547810 master-0 kubenswrapper[3187]: I1203 14:08:04.547729 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:04.548502 master-0 kubenswrapper[3187]: I1203 14:08:04.547932 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:04.548840 master-0 kubenswrapper[3187]: I1203 14:08:04.548791 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:04.548911 master-0 kubenswrapper[3187]: I1203 14:08:04.548850 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:04.548911 master-0 kubenswrapper[3187]: I1203 14:08:04.548859 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:05.259234 master-0 kubenswrapper[3187]: I1203 14:08:05.259160 3187 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": net/http: TLS handshake timeout Dec 03 14:08:05.269729 master-0 kubenswrapper[3187]: E1203 
14:08:05.269662 3187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 03 14:08:05.498455 master-0 kubenswrapper[3187]: E1203 14:08:05.498223 3187 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="master-0" Dec 03 14:08:05.522559 master-0 kubenswrapper[3187]: W1203 14:08:05.522386 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 14:08:05.522875 master-0 kubenswrapper[3187]: I1203 14:08:05.522572 3187 trace.go:236] Trace[1747726190]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:07:55.519) (total time: 10002ms): Dec 03 14:08:05.522875 master-0 kubenswrapper[3187]: Trace[1747726190]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:08:05.522) Dec 03 14:08:05.522875 master-0 kubenswrapper[3187]: Trace[1747726190]: [10.002588654s] [10.002588654s] END Dec 03 14:08:05.522875 master-0 kubenswrapper[3187]: E1203 14:08:05.522616 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 14:08:05.540368 master-0 
kubenswrapper[3187]: I1203 14:08:05.540316 3187 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060" exitCode=1 Dec 03 14:08:05.540514 master-0 kubenswrapper[3187]: I1203 14:08:05.540374 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060"} Dec 03 14:08:05.540563 master-0 kubenswrapper[3187]: I1203 14:08:05.540542 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:05.541206 master-0 kubenswrapper[3187]: I1203 14:08:05.541184 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:05.541313 master-0 kubenswrapper[3187]: I1203 14:08:05.541215 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:05.541313 master-0 kubenswrapper[3187]: I1203 14:08:05.541224 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:05.541711 master-0 kubenswrapper[3187]: I1203 14:08:05.541676 3187 scope.go:117] "RemoveContainer" containerID="1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060" Dec 03 14:08:05.872818 master-0 kubenswrapper[3187]: W1203 14:08:05.872650 3187 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 03 14:08:05.872818 master-0 kubenswrapper[3187]: I1203 14:08:05.872767 3187 trace.go:236] Trace[1266032818]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:07:55.870) (total time: 10002ms): Dec 03 14:08:05.872818 master-0 kubenswrapper[3187]: Trace[1266032818]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (14:08:05.872) Dec 03 14:08:05.872818 master-0 kubenswrapper[3187]: Trace[1266032818]: [10.002217564s] [10.002217564s] END Dec 03 14:08:05.872818 master-0 kubenswrapper[3187]: E1203 14:08:05.872792 3187 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 03 14:08:06.017652 master-0 kubenswrapper[3187]: I1203 14:08:06.017526 3187 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 03 14:08:06.017931 master-0 kubenswrapper[3187]: I1203 14:08:06.017896 3187 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 14:08:06.029702 master-0 kubenswrapper[3187]: I1203 14:08:06.029514 3187 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Dec 03 14:08:06.029702 master-0 kubenswrapper[3187]: I1203 14:08:06.029620 3187 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 03 14:08:06.544798 master-0 kubenswrapper[3187]: I1203 14:08:06.544711 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7"} Dec 03 14:08:06.545124 master-0 kubenswrapper[3187]: I1203 14:08:06.544857 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:06.545553 master-0 kubenswrapper[3187]: I1203 14:08:06.545529 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:06.545553 master-0 kubenswrapper[3187]: I1203 14:08:06.545562 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:06.545759 master-0 kubenswrapper[3187]: I1203 14:08:06.545573 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:08.344280 master-0 kubenswrapper[3187]: I1203 14:08:08.344215 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 
03 14:08:08.345490 master-0 kubenswrapper[3187]: I1203 14:08:08.344381 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:08.345490 master-0 kubenswrapper[3187]: I1203 14:08:08.345343 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:08.345490 master-0 kubenswrapper[3187]: I1203 14:08:08.345374 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:08.345490 master-0 kubenswrapper[3187]: I1203 14:08:08.345386 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:08.698815 master-0 kubenswrapper[3187]: I1203 14:08:08.698692 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:08.700304 master-0 kubenswrapper[3187]: I1203 14:08:08.700254 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:08.700304 master-0 kubenswrapper[3187]: I1203 14:08:08.700298 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:08.700471 master-0 kubenswrapper[3187]: I1203 14:08:08.700313 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:08.700471 master-0 kubenswrapper[3187]: I1203 14:08:08.700340 3187 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:08:09.084634 master-0 kubenswrapper[3187]: I1203 14:08:09.084414 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:09.084825 master-0 kubenswrapper[3187]: I1203 14:08:09.084663 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Dec 03 14:08:09.085904 master-0 kubenswrapper[3187]: I1203 14:08:09.085824 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:09.085904 master-0 kubenswrapper[3187]: I1203 14:08:09.085898 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:09.086012 master-0 kubenswrapper[3187]: I1203 14:08:09.085924 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: I1203 14:08:09.929098 3187 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]log ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]etcd ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: 
[+]poststarthook/priority-and-fairness-filter ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-apiextensions-informers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-apiextensions-controllers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/crd-informer-synced ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-system-namespaces-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/rbac/bootstrap-roles ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/bootstrap-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/start-kube-aggregator-informers ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/apiservice-registration-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]autoregister-completion ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/apiservice-openapi-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: livez check failed Dec 03 14:08:09.929253 master-0 kubenswrapper[3187]: I1203 14:08:09.929229 3187 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:10.803452 master-0 kubenswrapper[3187]: I1203 14:08:10.803285 3187 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:08:11.434281 master-0 kubenswrapper[3187]: I1203 14:08:11.434196 3187 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:11.981739 master-0 kubenswrapper[3187]: I1203 14:08:11.981674 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:11.981946 master-0 kubenswrapper[3187]: I1203 14:08:11.981816 3187 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:11.982948 master-0 kubenswrapper[3187]: I1203 14:08:11.982893 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:11.982948 master-0 kubenswrapper[3187]: I1203 14:08:11.982946 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:11.983069 master-0 kubenswrapper[3187]: I1203 14:08:11.982954 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:12.387370 master-0 kubenswrapper[3187]: E1203 14:08:12.387142 3187 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 14:08:13.784834 master-0 kubenswrapper[3187]: I1203 14:08:13.784369 3187 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Dec 03 14:08:13.784834 master-0 kubenswrapper[3187]: I1203 14:08:13.784533 3187 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Dec 03 14:08:13.784834 master-0 kubenswrapper[3187]: E1203 14:08:13.784560 3187 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Dec 03 14:08:13.789596 master-0 kubenswrapper[3187]: I1203 14:08:13.789531 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:08:13.789756 master-0 kubenswrapper[3187]: I1203 14:08:13.789596 3187 setters.go:603] "Node became not ready" node="master-0" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:08:13Z","lastTransitionTime":"2025-12-03T14:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 14:08:13.800665 master-0 kubenswrapper[3187]: E1203 14:08:13.800575 3187 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"764a923e-eafb-47f4-8635-9cb972b9b445\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:13.804880 master-0 kubenswrapper[3187]: I1203 14:08:13.804804 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:08:13.804880 master-0 kubenswrapper[3187]: I1203 14:08:13.804847 3187 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:08:13Z","lastTransitionTime":"2025-12-03T14:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:08:13.813331 master-0 kubenswrapper[3187]: E1203 14:08:13.813258 3187 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"764a923e-eafb-47f4-8635-9cb972b9b445\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:13.817115 master-0 kubenswrapper[3187]: I1203 14:08:13.817090 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:08:13.817221 master-0 kubenswrapper[3187]: I1203 14:08:13.817125 3187 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:08:13Z","lastTransitionTime":"2025-12-03T14:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:08:13.825201 master-0 kubenswrapper[3187]: E1203 14:08:13.825083 3187 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"764a923e-eafb-47f4-8635-9cb972b9b445\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:13.829383 master-0 kubenswrapper[3187]: I1203 14:08:13.829346 3187 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:08:13.829553 master-0 kubenswrapper[3187]: I1203 14:08:13.829394 3187 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:08:13Z","lastTransitionTime":"2025-12-03T14:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:08:13.838386 master-0 kubenswrapper[3187]: E1203 14:08:13.838335 3187 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"764a923e-eafb-47f4-8635-9cb972b9b445\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:13.838386 master-0 kubenswrapper[3187]: E1203 14:08:13.838379 3187 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 14:08:13.838681 master-0 kubenswrapper[3187]: E1203 14:08:13.838410 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:13.938823 master-0 kubenswrapper[3187]: E1203 14:08:13.938746 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:14.039389 master-0 kubenswrapper[3187]: E1203 14:08:14.039324 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:14.139953 master-0 kubenswrapper[3187]: E1203 14:08:14.139826 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:14.240469 master-0 kubenswrapper[3187]: E1203 14:08:14.240335 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:14.341581 master-0 kubenswrapper[3187]: E1203 14:08:14.341381 3187 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:08:14.399116 master-0 kubenswrapper[3187]: I1203 14:08:14.399061 3187 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:14.574541 master-0 kubenswrapper[3187]: 
I1203 14:08:14.574468 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:14.588163 master-0 kubenswrapper[3187]: I1203 14:08:14.588100 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:14.754515 master-0 kubenswrapper[3187]: E1203 14:08:14.754407 3187 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:14.925840 master-0 kubenswrapper[3187]: I1203 14:08:14.925796 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:14.930889 master-0 kubenswrapper[3187]: I1203 14:08:14.930850 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:14.939806 master-0 kubenswrapper[3187]: E1203 14:08:14.939772 3187 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:15.263305 master-0 kubenswrapper[3187]: I1203 14:08:15.263231 3187 apiserver.go:52] "Watching apiserver" Dec 03 14:08:15.300587 master-0 kubenswrapper[3187]: I1203 14:08:15.299890 3187 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 14:08:15.302544 master-0 kubenswrapper[3187]: I1203 14:08:15.302373 3187 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq","openshift-controller-manager/controller-manager-78d987764b-xcs5w","openshift-kube-apiserver/kube-apiserver-master-0","openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j","openshift-multus/multus-kk4tm","openshift-kube-apiserver/installer-2-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-monitoring/alertmanager-main-0","openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml","openshift-multus/multus-additional-cni-plugins-42hmk","openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l","openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl","openshift-ovn-kubernetes/ovnkube-node-txl6b","openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg","openshift-service-ca/service-ca-6b8bb995f7-b68p8","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-ingress-canary/ingress-canary-vkpv4","openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n","openshift-marketplace/community-operators-7fwtv","openshift-multus/multus-admission-controller-5bdcc987c4-x99xc","assisted-installer/assisted-installer-controller-stq5g","openshift-cluster-node-tuning-operator/tuned-7zkbg","openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h","openshift-console/console-c5d7cd7f9-2hp75","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-etcd/etcd-master-0","openshift-kube-apiserver/installer-4-master-0","openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg","openshift-marketplace/redhat-operators-6z4sc",
"openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-apiserver/apiserver-6985f84b49-v9vlg","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-console-operator/console-operator-77df56447c-vsrxx","openshift-dns/dns-default-5m4f8","openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-monitoring/prometheus-k8s-0","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-kube-apiserver/installer-1-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k","openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw","openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7","openshift-etcd/installer-1-master-0","openshift-monitoring/prometheus-operator-565bdcb8-477pk","openshift-monitoring/thanos-querier-cc996c4bd-j4hzr","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr","openshift-console/downloads-6f5db8559b-96ljh","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-marketplace/certified-operators-t8rt7","openshift-marketplace/redhat-marketplace-ddwmn","openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2","openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql","openshift-authentication/oauth-openshift-747bdb58b5-mn76f","openshift-catalogd/catalogd-controller-manager-754cfd84-qf898","openshift-ingress/router-default-54f97f57-rr9px","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/installer-5-master-0",
"openshift-monitoring/node-exporter-b62gf","openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8","openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w","openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-network-operator/iptables-alerter-n24qb","openshift-console/console-648d88c756-vswh8","openshift-dns/node-resolver-4xlhs","openshift-machine-api/machine-api-operator-7486ff55f-wcnxg","openshift-machine-config-operator/machine-config-daemon-2ztl9","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx","openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz","openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-image-registry/node-ca-4p4zh","openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-multus/network-metrics-daemon-ch7xd","openshift-network-node-identity/network-node-identity-c8csx","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-insights/insights-operator-59d99f9b7b-74sss","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-target-pcchm","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg","openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w","openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5","openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29","openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r","openshift-machine-config-operator/machine-config-server-pvrfs",
"openshift-monitoring/metrics-server-555496955b-vpcbs","openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"] Dec 03 14:08:15.302985 master-0 kubenswrapper[3187]: I1203 14:08:15.302887 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:15.302985 master-0 kubenswrapper[3187]: I1203 14:08:15.302948 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:15.302985 master-0 kubenswrapper[3187]: I1203 14:08:15.302974 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303058 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303179 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303221 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303370 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303439 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303469 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303501 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303511 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303541 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303593 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303632 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303684 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303684 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: E1203 14:08:15.303723 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303883 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.303996 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:15.304077 master-0 kubenswrapper[3187]: I1203 14:08:15.304050 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304175 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304236 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304287 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304324 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304354 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304499 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304573 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304616 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304654 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304789 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304846 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.304862 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.304945 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.305012 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: I1203 14:08:15.305063 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:08:15.305324 master-0 kubenswrapper[3187]: E1203 14:08:15.305110 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:15.306141 master-0 kubenswrapper[3187]: I1203 14:08:15.305559 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:15.306141 master-0 kubenswrapper[3187]: E1203 14:08:15.305600 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:15.306141 master-0 kubenswrapper[3187]: I1203 14:08:15.305648 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:15.306141 master-0 kubenswrapper[3187]: I1203 14:08:15.305956 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:15.306141 master-0 kubenswrapper[3187]: I1203 14:08:15.306034 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.306562 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.306796 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: E1203 14:08:15.306860 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.306963 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: E1203 14:08:15.306993 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307044 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307078 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307180 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307539 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307621 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:15.307671 master-0 kubenswrapper[3187]: I1203 14:08:15.307668 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 14:08:15.308216 master-0 kubenswrapper[3187]: E1203 14:08:15.307697 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:15.308216 master-0 kubenswrapper[3187]: I1203 14:08:15.307763 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.308585 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: E1203 14:08:15.308615 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: E1203 14:08:15.308647 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: E1203 14:08:15.308698 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309535 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309585 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309617 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309604 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309585 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309678 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 14:08:15.309722 master-0 kubenswrapper[3187]: I1203 14:08:15.309693 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: E1203 14:08:15.310531 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.309698 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: E1203 14:08:15.310641 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.309662 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.311362 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.311466 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.312617 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.312809 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.313203 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: E1203 14:08:15.313274 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:15.313635 master-0 kubenswrapper[3187]: I1203 14:08:15.313331 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:15.314545 master-0 kubenswrapper[3187]: I1203 14:08:15.314400 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:08:15.314545 master-0 kubenswrapper[3187]: E1203 14:08:15.314494 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:15.314669 master-0 kubenswrapper[3187]: I1203 14:08:15.314628 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314769 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314794 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314855 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314862 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314786 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 14:08:15.314965 master-0 kubenswrapper[3187]: I1203 14:08:15.314954 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 14:08:15.315221 master-0 kubenswrapper[3187]: I1203 14:08:15.314985 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 14:08:15.315221 master-0 kubenswrapper[3187]: I1203 14:08:15.315029 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:15.315221 master-0 kubenswrapper[3187]: I1203 14:08:15.315113 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 14:08:15.315448 master-0 kubenswrapper[3187]: I1203 14:08:15.315407 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:08:15.315521 master-0 kubenswrapper[3187]: I1203 14:08:15.315483 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4xlhs"
Dec 03 14:08:15.315521 master-0 kubenswrapper[3187]: I1203 14:08:15.315507 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 14:08:15.316044 master-0 kubenswrapper[3187]: I1203 14:08:15.315996 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.316479 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.316666 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: E1203 14:08:15.316719 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.316790 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.316865 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.316929 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: E1203 14:08:15.316966 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.317031 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: I1203 14:08:15.317080 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: E1203 14:08:15.317371 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:15.318254 master-0 kubenswrapper[3187]: E1203 14:08:15.317453 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318314 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318449 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.318529 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318551 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318639 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318684 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318705 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318730 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318794 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.318798 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.318946 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.318974 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319167 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319182 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.319226 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319306 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319425 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319440 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319515 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319544 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.319590 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.319601 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.319549 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320471 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320698 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320729 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.320759 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320829 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320847 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.320855 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320878 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320930 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.320954 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.320971 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: I1203 14:08:15.320988 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:15.321233 master-0 kubenswrapper[3187]: E1203 14:08:15.321022 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: I1203 14:08:15.321582 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: I1203 14:08:15.322326 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: E1203 14:08:15.322379 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: I1203 14:08:15.322582 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: E1203 14:08:15.322620 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: I1203 14:08:15.322654 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 14:08:15.322772 master-0 kubenswrapper[3187]: I1203 14:08:15.322714 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:15.323087 master-0 kubenswrapper[3187]: I1203 14:08:15.322721 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:15.323087 master-0 kubenswrapper[3187]: E1203 14:08:15.322846 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:15.323087 master-0 kubenswrapper[3187]: I1203 14:08:15.323078 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 14:08:15.323217 master-0 kubenswrapper[3187]: I1203 14:08:15.323148 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 14:08:15.323261 master-0 kubenswrapper[3187]: I1203 14:08:15.323214 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:08:15.323343 master-0 kubenswrapper[3187]: I1203 14:08:15.323320 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:15.323404 master-0 kubenswrapper[3187]: E1203 14:08:15.323372 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:15.323861 master-0 kubenswrapper[3187]: I1203 14:08:15.323828 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:15.324208 master-0 kubenswrapper[3187]: I1203 14:08:15.323940 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:15.324293 master-0 kubenswrapper[3187]: E1203 14:08:15.324232 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: I1203 14:08:15.324935 3187 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: E1203 14:08:15.325001 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: I1203 14:08:15.324941 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: I1203 14:08:15.325876 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: I1203 14:08:15.326122 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:15.325013 master-0 kubenswrapper[3187]: E1203 14:08:15.326175 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:15.326892 master-0 kubenswrapper[3187]: I1203 14:08:15.326813 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:15.326892 master-0 kubenswrapper[3187]: E1203 14:08:15.326849 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:15.326957 master-0 kubenswrapper[3187]: I1203 14:08:15.326898 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:15.326957 master-0 kubenswrapper[3187]: I1203 14:08:15.326910 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:15.326957 master-0 kubenswrapper[3187]: E1203 14:08:15.326943 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.327146 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.327441 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.327463 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: E1203 14:08:15.327513 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.327785 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328024 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328047 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328256 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328367 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328379 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328552 3187 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw" Dec 03 14:08:15.328612 master-0 kubenswrapper[3187]: I1203 14:08:15.328598 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 14:08:15.329746 master-0 kubenswrapper[3187]: I1203 14:08:15.328666 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Dec 03 14:08:15.329746 master-0 kubenswrapper[3187]: I1203 14:08:15.328717 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Dec 03 14:08:15.329746 master-0 kubenswrapper[3187]: I1203 14:08:15.328782 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 14:08:15.329746 master-0 kubenswrapper[3187]: I1203 14:08:15.328555 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 14:08:15.330807 master-0 kubenswrapper[3187]: I1203 14:08:15.330776 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:08:15.331205 master-0 kubenswrapper[3187]: I1203 14:08:15.331179 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 03 14:08:15.331558 master-0 kubenswrapper[3187]: I1203 14:08:15.331325 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 03 14:08:15.331558 master-0 kubenswrapper[3187]: I1203 14:08:15.331371 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 03 14:08:15.331558 master-0 
kubenswrapper[3187]: I1203 14:08:15.331408 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 14:08:15.331558 master-0 kubenswrapper[3187]: I1203 14:08:15.331518 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.331890 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.331926 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.332009 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.332020 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.332103 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.332234 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.332278 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.332342 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.332833 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.332897 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.332981 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: E1203 14:08:15.333028 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:15.333249 master-0 kubenswrapper[3187]: I1203 14:08:15.333223 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:15.333894 master-0 kubenswrapper[3187]: E1203 14:08:15.333318 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:15.333894 master-0 kubenswrapper[3187]: I1203 14:08:15.333474 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:15.333894 master-0 kubenswrapper[3187]: E1203 14:08:15.333518 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:15.339827 master-0 kubenswrapper[3187]: I1203 14:08:15.339657 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:15.339827 master-0 kubenswrapper[3187]: I1203 14:08:15.339675 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:15.339990 master-0 kubenswrapper[3187]: E1203 14:08:15.339790 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:15.340182 master-0 kubenswrapper[3187]: E1203 14:08:15.340108 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:15.342660 master-0 kubenswrapper[3187]: I1203 14:08:15.342511 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"image-registry-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk4tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-65dc4bcb88-96zcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.342660 master-0 kubenswrapper[3187]: I1203 14:08:15.342573 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:15.342906 master-0 kubenswrapper[3187]: I1203 14:08:15.342814 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:15.343164 master-0 kubenswrapper[3187]: I1203 14:08:15.343021 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:15.343164 master-0 kubenswrapper[3187]: E1203 14:08:15.343110 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:15.343278 master-0 kubenswrapper[3187]: I1203 14:08:15.343220 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:15.343315 master-0 kubenswrapper[3187]: I1203 14:08:15.343249 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:15.343625 master-0 kubenswrapper[3187]: E1203 14:08:15.343578 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:15.344903 master-0 kubenswrapper[3187]: I1203 14:08:15.344783 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 14:08:15.345078 master-0 kubenswrapper[3187]: I1203 14:08:15.345019 3187 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh" Dec 03 14:08:15.345321 master-0 kubenswrapper[3187]: I1203 14:08:15.345125 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 14:08:15.345430 master-0 kubenswrapper[3187]: I1203 14:08:15.345402 3187 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 14:08:15.377472 master-0 kubenswrapper[3187]: I1203 14:08:15.377367 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52100521-67e9-40c9-887c-eda6560f06e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T13:57:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T13:55:10Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgq6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-7978bf889c-n64v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.382765 master-0 kubenswrapper[3187]: I1203 14:08:15.382730 3187 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 14:08:15.398116 master-0 
kubenswrapper[3187]: I1203 14:08:15.398026 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24dfafc9-86a9-450e-ac62-a871138106c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64dfea633af4d4474c6facea89f78f856a4d29ba0749d89ddb78352c5c8bc092\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T13:57:21Z\\\",\\\"message\\\":\\\".32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nW1203 13:57:07.996083 1 logging.go:55] [core] [Channel #10 SubChannel #12]grpc: addrConn.createTransport failed to connect to {Addr: \\\\\\\"192.168.32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. 
Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nW1203 13:57:11.385784 1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: \\\\\\\"192.168.32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nW1203 13:57:11.576753 1 logging.go:55] [core] [Channel #10 SubChannel #12]grpc: addrConn.createTransport failed to connect to {Addr: \\\\\\\"192.168.32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nW1203 13:57:16.873079 1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: \\\\\\\"192.168.32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nW1203 13:57:17.128409 1 logging.go:55] [core] [Channel #10 SubChannel #12]grpc: addrConn.createTransport failed to connect to {Addr: \\\\\\\"192.168.32.10:2379\\\\\\\", ServerName: \\\\\\\"192.168.32.10:2379\\\\\\\", }. 
Err: connection error: desc = \\\\\\\"transport: Error while dialing: dial tcp 192.168.32.10:2379: connect: connection refused\\\\\\\"\\\\nE1203 13:57:21.132914 1 run.go:72] \\\\\\\"command failed\\\\\\\" err=\\\\\\\"unable to load configmap based request-header-client-ca-file: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps extension-apiserver-authentication)\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T13:56:20Z\\\"}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m789m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-57fd58bc7b-kktql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.410559 master-0 kubenswrapper[3187]: I1203 14:08:15.410476 3187 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5b0add1-6a3b-4ab3-9334-83f7416876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}}}],\\\"startTime\\\":\\\"2025-12-03T14:07:52Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.420509 master-0 kubenswrapper[3187]: I1203 14:08:15.420440 3187 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"739dc4db-558c-4492-aca2-06261f310a29\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"startup-monitor\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"manifests\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources/secrets\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources/configmaps\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"var-log\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.32.10\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2025-12-03T14:07:52Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-startup-monitor-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.431059 master-0 kubenswrapper[3187]: I1203 14:08:15.430999 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180b512-bf0c-4ddc-a5cf-f04acc830a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-snapshot-controller-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-operator-7b795784b8-44frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.443931 master-0 kubenswrapper[3187]: I1203 14:08:15.443848 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa79e15-1875-4865-b5e0-aecd4c447bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"package-server-manager-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7q659\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7q659\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-75b4d49d4c-h599p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.457231 master-0 kubenswrapper[3187]: I1203 14:08:15.457156 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kk4tm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c777c9de-1ace-46be-b5c2-c71d252f53f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eba6e454fefc0e101c8511eee440e174bf61ad4769d6cf0022b4a64c3ee6c93e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T13:53:56Z\\\",\\\"message\\\":\\\"2025-12-03T13:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_df5208fa-c8cb-44ab-9fbc-eb7044c08e97\\\\n2025-12-03T13:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_df5208fa-c8cb-44ab-9fbc-eb7044c08e97 to /host/opt/cni/bin/\\\\n2025-12-03T13:53:11Z [verbose] multus-daemon started\\\\n2025-12-03T13:53:11Z [verbose] Readiness Indicator file check\\\\n2025-12-03T13:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T13:53:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5fn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-kk4tm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.469767 master-0 kubenswrapper[3187]: I1203 14:08:15.469692 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v429m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-pcchm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.484354 master-0 kubenswrapper[3187]: I1203 14:08:15.484235 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/private\\\",\\\"name\\\":\\\"default-certificate\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/haproxy/conf/metrics-auth\\\",\\\"name\\\":\\\"stats-auth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-certs\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57rrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ingress\"/\"router-default-54f97f57-rr9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.494832 master-0 kubenswrapper[3187]: I1203 14:08:15.494743 3187 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eecc43f5-708f-4395-98cc-696b243d6321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/mcs\\\",\\\"name\\\":\\\"certs\\\"},{\\\"mountPath\\\":\\\"/etc/mcs/bootstrap-token\\\",\\\"name\\\":\\\"node-bootstrap-token\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-pvrfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.510002 master-0 kubenswrapper[3187]: I1203 14:08:15.509879 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-server\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-session\\\",\\\"name\\\":\\\"v4-0-config-system-session\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-cliconfig\\\",\\\"name\\\":\\\"v4-0-config-system-cliconfig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-serving-cert\\\",\\\"name\\\":\\\"v4-0-config-system-serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-service-ca\\\",\\\"name\\\":\\\"v4-0-config-system-service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-router-certs\\\",\\\"name\\\":\\\"v4-0-config-system-router-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/config/system/secrets/v4-0-config-system-ocp-branding-template\\\",\\\"name\\\":\\\"v4-0-config-system-ocp-branding-template\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-login\\\",\\\"name\\\":\\\"v4-0-config-user-template-login\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-provider-selection\\\",\\\"name\\\":\\\"v4-0-config-user-template-provider-selection\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-error\\\",\\\"name\\\":\\\"v4-0-config-user-template-error\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle\\\",\\\"name\\\":\\\"v4-0-config-system-trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v7d88\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-747bdb58b5-mn76f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.520311 master-0 kubenswrapper[3187]: I1203 14:08:15.520168 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"image-registry-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk4tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-65dc4bcb88-96zcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.534855 master-0 kubenswrapper[3187]: I1203 14:08:15.534765 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a969ddd4-e20d-4dd2-84f4-a140bac65df0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/kubelet/\\\",\\\"name\\\":\\\"node-pullsecrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/image-import-ca\\\",\\\"name\\\":\\\"image-import-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\
\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/openshift-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbzpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbzpz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-6985f84b49-v9vlg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.546747 master-0 kubenswrapper[3187]: I1203 14:08:15.546645 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b95a5a6-db93-4a58-aaff-3619d130c8cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-storage-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-storage-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"cluster-storage-operator-f84784664-ntb9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.559610 master-0 kubenswrapper[3187]: I1203 14:08:15.559497 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b340553b-d483-4839-8328-518f27770832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator 
cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"samples-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92p99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92p99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-6d64b47964-jjd7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.569293 master-0 kubenswrapper[3187]: I1203 14:08:15.569256 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:15.569692 master-0 kubenswrapper[3187]: I1203 14:08:15.569653 3187 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:15.575362 master-0 kubenswrapper[3187]: I1203 14:08:15.575325 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:15.575792 master-0 kubenswrapper[3187]: I1203 14:08:15.575728 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"baremetal-kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/baremetal-kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-baremetal-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wh8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6968e593235a88afa79edc8b10d573d1fb5a5c2405e9515654503c6e600e218c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T13:58:43Z\\\",\\\"message\\\":\\\"E1203 13:58:43.611038 1 main.go:144] \\\\\\\"unable to get enabled features\\\\\\\" err=\\\\\\\"unable to determine Platform: Get \\\\\\\\\\\\\\\"https://172.30.0.1:443/apis/config.openshift.io/v1/infrastructures/cluster\\\\\\\\\\\\\\\": dial tcp 172.30.0.1:443: connect: connection 
refused\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T13:58:43Z\\\"}},\\\"name\\\":\\\"cluster-baremetal-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/images\\\",\\\"name\\\":\\\"images\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wh8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-baremetal-operator-5fdc576499-j2n8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.592708 master-0 kubenswrapper[3187]: E1203 14:08:15.592650 3187 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:15.592973 master-0 kubenswrapper[3187]: E1203 14:08:15.592763 3187 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:15.604384 master-0 kubenswrapper[3187]: I1203 14:08:15.604312 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12822200-5857-4e2a-96bf-31c2d917ae9e\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-controller-manager\"/\"controller-manager-5c8b4c9687-4pxw5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.617331 master-0 kubenswrapper[3187]: I1203 14:08:15.617246 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52100521-67e9-40c9-887c-eda6560f06e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62452044bee06eae6437134d1f4ed9d51414f96ec17f88afa01c1f2dd91793ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T13:57:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T13:55:10Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgq6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-7978bf889c-n64v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.631441 master-0 kubenswrapper[3187]: I1203 14:08:15.631349 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97c85a3404185590aa244f99da41b5cf3aff42184641a233e35eb7bc3ab8d12c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T13:57:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T13:55:15Z\\\"}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrngd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-56f5898f45-fhnc5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.647002 master-0 kubenswrapper[3187]: I1203 14:08:15.646901 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e97e1725-cb55-4ce3-952d-a4fd0731577d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://338a3f1b4232df3516e274dce252d29a4b6cb984b54c40d11e848ad1fa67e237\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T13:56:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T13:52:09Z\\\"}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9hpt5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-6cbf58c977-8lh6n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.661293 master-0 kubenswrapper[3187]: I1203 14:08:15.661155 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcc78129-4a81-410e-9a42-b12043b5a75a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69fbac6ffb9329c164910a1a0e4f9cc030093f8a21615d5112059f48f8818e91\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T13:57:57Z\\\",\\\"message\\\":\\\"ller-runtime/pkg/cache/internal/informers.go:106: watch of *v1.Role ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW1203 13:57:57.764414 1 reflector.go:484] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:106: watch of *v1.DaemonSet ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\nW1203 13:57:57.764409 1 reflector.go:484] 
sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:106: watch of *v1.DNSRecord ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2025-12-03T13:57:57.764Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tStopping and waiting for webhooks\\\\nW1203 13:57:57.764479 1 reflector.go:484] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:106: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2025-12-03T13:57:57.764Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tStopping and waiting for HTTP servers\\\\nW1203 13:57:57.764310 1 reflector.go:484] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:106: watch of *v1.Deployment ended with: an error on the server (\\\\\\\"unable to decode an event from the watch stream: context canceled\\\\\\\") has prevented the request from succeeding\\\\n2025-12-03T13:57:57.764Z\\\\tINFO\\\\toperator.init.controller-runtime.metrics\\\\truntime/asm_amd64.s:1695\\\\tShutting down metrics server with timeout of 1 minute\\\\n2025-12-03T13:57:57.764Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tWait completed, proceeding to shutdown the manager\\\\n2025-12-03T13:57:57.768Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:989\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for canary_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T13:55:56Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x22gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x22gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-85dbd94574-8jfp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.671656 master-0 kubenswrapper[3187]: I1203 14:08:15.671562 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36da3c2f-860c-4188-a7d7-5b615981a835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/signing-key\\\",\\\"name\\\":\\\"signing-key\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/signing-cabundle\\\",\\\"name\\\":\\\"signing-cabundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzlgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-6b8bb995f7-b68p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.690239 master-0 kubenswrapper[3187]: I1203 14:08:15.690088 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4dd1d142-6569-438d-b0c2-582aed44812d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/serving-cert\\\",\\\"name\\\":\\\"console-serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/oauth-config\\\",\\\"name\\\":\\\"console-oauth-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/console-config\\\",\\\"name\\\":\\\"console-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/service-ca\\\",\\\"name\\\":\\\"service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/oauth-serving-cert\\\",\\\"name\\\":\\\"oauth-serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gfzrw\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"console-c5d7cd7f9-2hp75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.701691 master-0 kubenswrapper[3187]: I1203 14:08:15.701573 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/dns-default-5m4f8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4669137a-fbc4-41e1-8eeb-5f06b9da2641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7cvkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-5m4f8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.713047 master-0 kubenswrapper[3187]: I1203 14:08:15.712917 3187 
status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-approver-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"machine-approver-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsjls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-approver-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gsjls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-machine-approver\"/\"machine-approver-cb84b9cdf-qn94w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.722551 master-0 kubenswrapper[3187]: I1203 14:08:15.722492 3187 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c314fa4-1222-42cf-b87a-f2cd19e67dde\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29412840-nfbpl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.752804 master-0 kubenswrapper[3187]: I1203 14:08:15.752720 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-4-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b1e0884-ff54-419b-90d3-25f561a6391d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"reason\\\":\\\"PodFailed\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"reason\\\":\\\"PodFailed\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"installer\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/\\\",\\\"name\\\":\\\"kubelet-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"}]}],\\\"phase\\\":\\\"Failed\\\",\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver\"/\"installer-4-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.800959 master-0 kubenswrapper[3187]: I1203 14:08:15.800728 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/alertmanager-main-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff21a9a5-706f-4c71-bd0c-5586374f819a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric 
prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"alertmanager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/certs\\\",\\\"name\\\":\\\"tls-assets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/alertmanager\\\",\\\"name\\\":\\\"alertmanager-main-db\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"alertmanager-trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"config-reloader\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"
The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-metric\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prom-label-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-52zj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"alertmanager-main-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.831554 master-0 kubenswrapper[3187]: I1203 14:08:15.831444 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"583797aa-63a8-4a46-b3e4-5163c7673be3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:54Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":5,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb
16bc40e17d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:07:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"startTime\\\":\\\"2025-12-03T14:07:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.882081 master-0 kubenswrapper[3187]: I1203 14:08:15.881925 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77430348-b53a-4898-8047-be8bb542a0a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txl6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.914770 master-0 kubenswrapper[3187]: I1203 14:08:15.914652 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c1ebb9-f052-410b-a999-45e9b75b0e58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvzf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvzf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ch7xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.956019 master-0 kubenswrapper[3187]: I1203 14:08:15.955896 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69b752ed-691c-4574-a01e-428d4bf85b75\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cache/\\\",\\\"name\\\":\\\"cache\\\"},{\\\"mountPath\\\":\\\"/var/certs\\\",\\\"name\\\":\\\"catalogserver-certs\\\"},{\\\"mountPath\\\":\\\"/var/ca-certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/containers\\\",\\\"name\\\":\\\"etc-containers\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/docker\\\",\\\"name\\\":\\\"etc-docker\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t8knq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-catalogd\"/\"catalogd-controller-manager-754cfd84-qf898\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:15.992111 master-0 kubenswrapper[3187]: I1203 14:08:15.992004 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e0a2889-39a5-471e-bd46-958e2f8eacaa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus-operator-admission-webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"tls-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:16.038693 master-0 kubenswrapper[3187]: I1203 14:08:16.038599 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f484c1-1564-49c7-a43d-bd8b971cea20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"machine-api-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjbsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/machine-api-operator-config/images\\\",\\\"name\\\":\\\"images\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjbsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-7486ff55f-wcnxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:16.048006 master-0 kubenswrapper[3187]: I1203 14:08:16.047700 3187 trace.go:236] Trace[1831655263]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:07:56.387) (total time: 19659ms): Dec 03 14:08:16.048006 master-0 kubenswrapper[3187]: Trace[1831655263]: ---"Objects listed" error: 19659ms (14:08:16.047) Dec 03 14:08:16.048006 master-0 kubenswrapper[3187]: Trace[1831655263]: [19.659728719s] [19.659728719s] END Dec 03 14:08:16.048006 master-0 kubenswrapper[3187]: I1203 14:08:16.047750 3187 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:16.065506 master-0 kubenswrapper[3187]: I1203 14:08:16.065148 3187 trace.go:236] Trace[828485400]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:07:56.443) (total time: 19621ms): Dec 03 14:08:16.065506 master-0 kubenswrapper[3187]: Trace[828485400]: ---"Objects listed" error: 19621ms (14:08:16.065) Dec 03 14:08:16.065506 master-0 kubenswrapper[3187]: 
Trace[828485400]: [19.62112616s] [19.62112616s] END Dec 03 14:08:16.065506 master-0 kubenswrapper[3187]: I1203 14:08:16.065185 3187 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:16.066627 master-0 kubenswrapper[3187]: I1203 14:08:16.066466 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:16.091452 master-0 kubenswrapper[3187]: I1203 14:08:16.089302 3187 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 14:08:16.132443 master-0 kubenswrapper[3187]: I1203 14:08:16.132293 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e3675c78-1902-4b92-8a93-cf2dc316f060\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:08:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls-cert\\\",\\\"name\\\":\\\"cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-28n2f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-vkpv4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:16.173437 master-0 kubenswrapper[3187]: I1203 14:08:16.173344 3187 status_manager.go:875] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ba502ba-1179-478e-b4b9-f3409320b0ad\\\"},\\\"status\\\":{\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/client-ca\\\",\\\"name\\\":\\\"client-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lq4dz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-678c7f799b-4b7nv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190614 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190673 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: 
\"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190708 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190736 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190759 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190781 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190806 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190830 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190852 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: E1203 14:08:16.190860 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190876 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.190822 master-0 kubenswrapper[3187]: I1203 14:08:16.190900 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: 
\"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.190939 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.69091797 +0000 UTC m=+24.657453865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.190960 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191022 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191052 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: 
\"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191080 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191107 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191133 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191165 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191177 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191205 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191217 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.691205378 +0000 UTC m=+24.657741273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191238 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191264 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191288 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191310 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " 
pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191332 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191374 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191398 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191437 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191461 3187 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object 
"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191506 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.691495546 +0000 UTC m=+24.658031441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191463 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191799 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191829 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191523 3187 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191638 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191860 3187 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.191953 3187 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.192002 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.69196361 +0000 UTC m=+24.658499505 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.192029 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692017781 +0000 UTC m=+24.658553676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.192049 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692037642 +0000 UTC m=+24.658573537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: I1203 14:08:16.191881 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.192070 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:16.192669 master-0 kubenswrapper[3187]: E1203 14:08:16.192121 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692106584 +0000 UTC m=+24.658642479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192146 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692138265 +0000 UTC m=+24.658674160 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192185 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192173 3187 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192251 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192315 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192349 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192379 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192320 3187 configmap.go:193] Couldn't get configMap 
openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192410 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192457 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192484 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192514 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192522 
3187 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192538 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192542 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192570 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192580 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692566477 +0000 UTC m=+24.659102582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192598 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192624 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192659 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192687 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " 
pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192714 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192745 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192775 3187 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192811 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692801374 +0000 UTC m=+24.659337269 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192771 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192827 3187 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192860 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692850195 +0000 UTC m=+24.659386090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192855 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192903 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.192884 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.192946 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.692922487 +0000 UTC m=+24.659458382 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.193032 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.193052 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.193122 3187 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: I1203 14:08:16.193125 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.194839 master-0 kubenswrapper[3187]: E1203 14:08:16.193150 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:16.693121673 +0000 UTC m=+24.659657758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193163 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193176 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693165074 +0000 UTC m=+24.659701199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193193 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693184914 +0000 UTC m=+24.659721040 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193227 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193256 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193287 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193313 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193317 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193314 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193323 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693302088 +0000 UTC m=+24.659837983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193432 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193519 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " 
pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193494 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193606 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193619 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693603736 +0000 UTC m=+24.660139821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193648 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693637897 +0000 UTC m=+24.660174022 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193667 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193714 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193743 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193753 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193770 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193793 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193819 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193836 3187 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193847 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.204263 
master-0 kubenswrapper[3187]: E1203 14:08:16.193870 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.693859874 +0000 UTC m=+24.660395769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193889 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193899 3187 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193917 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.193934 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:16.693925826 +0000 UTC m=+24.660461721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193965 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.193995 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: E1203 14:08:16.194015 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.194025 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.194048 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.204263 master-0 kubenswrapper[3187]: I1203 14:08:16.194070 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194090 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194112 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194123 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 
14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194157 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194164 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194166 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194179 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694166492 +0000 UTC m=+24.660702577 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194202 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194136 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194204 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694195783 +0000 UTC m=+24.660731918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194160 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194250 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194262 3187 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194269 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694259815 +0000 UTC m=+24.660795710 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194331 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194355 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194375 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694357448 +0000 UTC m=+24.660893343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194399 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694390879 +0000 UTC m=+24.660926764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194435 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694412369 +0000 UTC m=+24.660948264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194443 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194452 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.69444572 +0000 UTC m=+24.660981835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194488 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694475801 +0000 UTC m=+24.661011886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194491 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194541 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194582 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194618 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: 
\"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194705 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194751 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194801 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194824 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194843 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: 
\"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194865 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: I1203 14:08:16.194867 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194893 3187 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194931 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694920264 +0000 UTC m=+24.661456359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194953 3187 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:16.207539 master-0 kubenswrapper[3187]: E1203 14:08:16.194970 3187 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.194989 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.694976366 +0000 UTC m=+24.661512261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195020 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695009246 +0000 UTC m=+24.661545341 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195106 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195138 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.6951285 +0000 UTC m=+24.661664585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.194896 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195188 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195189 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195221 3187 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195250 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195268 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695257464 +0000 UTC m=+24.661793359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195286 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195312 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195336 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195358 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195413 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195383 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195430 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195457 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695448769 +0000 UTC m=+24.661984664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195334 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195478 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195504 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195539 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195548 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695535531 +0000 UTC m=+24.662071626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195575 3187 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195632 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695620854 +0000 UTC m=+24.662156749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195646 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195665 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195688 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.695678836 +0000 UTC m=+24.662214731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195668 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.195783 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.195866 3187 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: E1203 14:08:16.197333 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697317042 +0000 UTC m=+24.663852937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.197319 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.197369 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:16.210357 master-0 kubenswrapper[3187]: I1203 14:08:16.197392 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197431 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197451 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197479 3187 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197510 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197480 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197523 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697512168 +0000 UTC m=+24.664048063 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197598 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697575779 +0000 UTC m=+24.664111854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197622 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197654 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197684 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197689 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197705 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197737 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697723284 +0000 UTC m=+24.664259399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197758 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197769 3187 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197764 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197799 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697792276 +0000 UTC m=+24.664328161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197816 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697809146 +0000 UTC m=+24.664345041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197829 3187 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197863 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197833 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197864 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697853597 +0000 UTC m=+24.664389492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197873 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.197908 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.697899779 +0000 UTC m=+24.664435874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197935 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197959 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.197980 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.198002 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.198023 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198040 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198052 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198082 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698073544 +0000 UTC m=+24.664609439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198096 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698089414 +0000 UTC m=+24.664625299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198098 3187 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: E1203 14:08:16.198119 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:16.211686 master-0 kubenswrapper[3187]: I1203 14:08:16.198041 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198121 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698116715 +0000 UTC m=+24.664652610 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198171 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198196 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198218 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198229 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198242 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698231038 +0000 UTC m=+24.664766943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198268 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698255289 +0000 UTC m=+24.664791194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198293 3187 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198320 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.69831289 +0000 UTC m=+24.664848785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198364 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198388 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198429 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198450 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198467 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198487 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198505 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198524 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198545 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198572 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198590 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198609 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198631 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod
\"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198650 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198669 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198688 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198706 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198725 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198742 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198785 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198801 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198818 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 
03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198852 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: E1203 14:08:16.198858 3187 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198872 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.212790 master-0 kubenswrapper[3187]: I1203 14:08:16.198894 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.198905 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.698894167 +0000 UTC m=+24.665430062 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.198917 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.198935 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.198953 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.198973 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199000 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199033 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199062 3187 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199114 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199227 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199243 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert 
podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699233947 +0000 UTC m=+24.665769842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199061 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199294 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699275518 +0000 UTC m=+24.665811613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199314 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199330 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199345 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.69933543 +0000 UTC m=+24.665871325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199386 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199442 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199470 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199481 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199496 3187 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699489274 +0000 UTC m=+24.666025169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199513 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199547 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199575 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 
14:08:16.199604 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199643 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199689 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699645248 +0000 UTC m=+24.666181233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199721 3187 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199748 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: E1203 14:08:16.199759 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699748911 +0000 UTC m=+24.666285016 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199761 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199786 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199809 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:16.214353 master-0 kubenswrapper[3187]: I1203 14:08:16.199705 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " 
pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.199784 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.699775462 +0000 UTC m=+24.666311557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.199869 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.199956 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.199970 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 
14:08:16.199985 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.199997 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.200015 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200033 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.700024909 +0000 UTC m=+24.666560804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.199982 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.199982 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.200684 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.200713 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.200392 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.199866 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200143 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201209 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201332 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701315026 +0000 UTC m=+24.667851111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200157 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200194 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201345 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201361 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200209 3187 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200219 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.200244 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201236 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201388 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701378978 +0000 UTC m=+24.667915103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201398 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201479 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201505 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701496761 +0000 UTC m=+24.668032846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201532 3187 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201543 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201557 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701548353 +0000 UTC m=+24.668084238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201586 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: I1203 14:08:16.201617 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201626 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701614174 +0000 UTC m=+24.668150069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:16.216702 master-0 kubenswrapper[3187]: E1203 14:08:16.201660 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701651446 +0000 UTC m=+24.668187341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.201674 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701669036 +0000 UTC m=+24.668204931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.201689 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701681376 +0000 UTC m=+24.668217271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.201705 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701698727 +0000 UTC m=+24.668234622 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.201728 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.201756 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.201765 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.201777 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.201801 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.201960 3187 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.201997 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.701987415 +0000 UTC m=+24.668523310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202033 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202061 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202087 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202118 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202156 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202187 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202213 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202242 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202268 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202292 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202320 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202356 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202382 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202823 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202852 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202877 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202903 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202934 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202954 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.202975 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203017 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203019 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.203024 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203133 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203074 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.703056905 +0000 UTC m=+24.669592800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: I1203 14:08:16.203160 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203078 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:16.218730 master-0 kubenswrapper[3187]: E1203 14:08:16.203275 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.205874 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.206128 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.206193 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.206981 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.203818 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.70322281 +0000 UTC m=+24.669758835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.207158 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.707140122 +0000 UTC m=+24.673676017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.207213 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.707203954 +0000 UTC m=+24.673739849 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.207345 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.208946 3187 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.208989 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209103 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.709072707 +0000 UTC m=+24.675608612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.209410 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209535 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.709512639 +0000 UTC m=+24.676048534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209635 3187 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209695 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209744 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.709735886 +0000 UTC m=+24.676271781 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209783 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.709771687 +0000 UTC m=+24.676307582 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.209817 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.209823 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.210008 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.709993683 +0000 UTC m=+24.676529578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210071 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210152 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210284 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210332 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210370 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210452 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210484 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210515 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210538 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210574 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: I1203 14:08:16.210592 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.210684 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.210768 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.710757965 +0000 UTC m=+24.677293860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:16.220018 master-0 kubenswrapper[3187]: E1203 14:08:16.210774 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.210846 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.210948 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.210983 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.710923399 +0000 UTC m=+24.677459294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211049 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211165 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211261 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211284 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211345 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211379 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211564 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211636 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211678 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211715 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211737 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211743 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211857 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.211999 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212045 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212080 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212106 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212132 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: 
\"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212154 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212190 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212220 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212249 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212274 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212334 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212366 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212397 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212449 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 
14:08:16.212222 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212509 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212548 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212567 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212612 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212666 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212685 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212709 3187 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.71269406 +0000 UTC m=+24.679229955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: I1203 14:08:16.212666 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.221155 master-0 kubenswrapper[3187]: E1203 14:08:16.212342 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212436 3187 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212445 3187 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.212614 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212461 3187 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212738 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.712718981 +0000 UTC m=+24.679254876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212946 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.712922026 +0000 UTC m=+24.679457921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212971 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:16.712962257 +0000 UTC m=+24.679498152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213006 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.712987778 +0000 UTC m=+24.679523673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213029 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713018339 +0000 UTC m=+24.679554234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.212810 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213060 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713042 +0000 UTC m=+24.679577885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213085 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713075841 +0000 UTC m=+24.679611736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213111 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213202 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713165613 +0000 UTC m=+24.679701538 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213254 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213325 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713303397 +0000 UTC m=+24.679839292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213317 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213359 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213393 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213396 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213434 3187 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213461 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713452091 +0000 UTC m=+24.679987986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213490 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213537 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 
14:08:16.213648 3187 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.213771 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.213993 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.713828822 +0000 UTC m=+24.680364727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.214039 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.214049 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: E1203 14:08:16.214094 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.714085299 +0000 UTC m=+24.680621194 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.214092 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.222303 master-0 kubenswrapper[3187]: I1203 14:08:16.214141 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214157 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214189 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214219 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214256 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214284 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214314 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214343 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: 
\"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214373 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214406 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214430 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214528 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214536 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:16.714513682 +0000 UTC m=+24.681049577 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214598 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214643 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214705 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.714686887 +0000 UTC m=+24.681222782 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214720 3187 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214768 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.214780 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.714768319 +0000 UTC m=+24.681304214 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214816 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214905 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214947 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.214989 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215028 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.215038 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.215084 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215098 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.715079438 +0000 UTC m=+24.681615333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215109 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215117 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.215119 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215160 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.7151483 +0000 UTC m=+24.681684195 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215194 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.71517386 +0000 UTC m=+24.681709755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215190 3187 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215203 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.215212 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215213 3187 configmap.go:193] 
Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: I1203 14:08:16.215237 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.225319 master-0 kubenswrapper[3187]: E1203 14:08:16.215238 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.715228962 +0000 UTC m=+24.681764847 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.215369 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.215644 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.715600392 +0000 UTC m=+24.682136287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.215679 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.715671415 +0000 UTC m=+24.682207310 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216493 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216557 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216593 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216629 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" 
Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216650 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216674 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216701 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216729 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216760 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216787 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216816 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.216819 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.216939 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.216965 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.716935421 +0000 UTC m=+24.683471316 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.216995 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.716985362 +0000 UTC m=+24.683521257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.216997 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217016 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.216836 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217046 3187 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717036173 +0000 UTC m=+24.683572068 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217106 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217079 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217110 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717097015 +0000 UTC m=+24.683633130 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.217244 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.217317 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: I1203 14:08:16.217318 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217364 3187 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217377 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717341602 +0000 UTC m=+24.683877497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:16.226649 master-0 kubenswrapper[3187]: E1203 14:08:16.217407 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717393644 +0000 UTC m=+24.683929549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217407 3187 status_manager.go:875] "Failed to update status for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6cb85e17-7e83-4845-834b-381f63dce73e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:07:52Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"logs\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T14:08:05Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T14:07:53Z\\\"}},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":8,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:08:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"l
ogs\\\"}]}],\\\"startTime\\\":\\\"2025-12-03T14:07:52Z\\\"}}\" for pod \"kube-system\"/\"bootstrap-kube-controller-manager-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217493 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217531 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717505197 +0000 UTC m=+24.684041092 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217565 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217599 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217599 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217623 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 
14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217654 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217680 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217700 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217722 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217745 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: 
\"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217769 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217793 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217783 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217809 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217825 3187 configmap.go:193] Couldn't 
get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217833 3187 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217817 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217886 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217896 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717876587 +0000 UTC m=+24.684412672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.217954 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.217958 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717945719 +0000 UTC m=+24.684481614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: E1203 14:08:16.218010 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.717996841 +0000 UTC m=+24.684532926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:16.227651 master-0 kubenswrapper[3187]: I1203 14:08:16.218042 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218082 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218119 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218156 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218180 3187 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218200 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718188316 +0000 UTC m=+24.684724211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218156 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218235 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218250 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218265 3187 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218275 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218302 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718285959 +0000 UTC m=+24.684822034 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218339 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218357 3187 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218364 3187 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218376 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218385 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.228395 
master-0 kubenswrapper[3187]: E1203 14:08:16.218401 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718388582 +0000 UTC m=+24.684924677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218384 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218377 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218484 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718468184 +0000 UTC m=+24.685004079 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218497 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218511 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718501135 +0000 UTC m=+24.685037230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218537 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718524916 +0000 UTC m=+24.685061021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218564 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218599 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218633 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218665 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") 
pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218704 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218723 3187 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218737 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218756 3187 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218771 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718760352 +0000 UTC m=+24.685296487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218815 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: I1203 14:08:16.218853 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218873 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:16.228395 master-0 kubenswrapper[3187]: E1203 14:08:16.218879 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718863155 +0000 UTC m=+24.685399250 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.218913 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.218947 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.218958 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.218989 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.718979649 +0000 UTC m=+24.685515534 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219007 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719001579 +0000 UTC m=+24.685537474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219024 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219074 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219028 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219123 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719115893 +0000 UTC m=+24.685651788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219121 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219158 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219175 3187 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: 
object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219184 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219208 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219229 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219276 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219314 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719227826 +0000 UTC m=+24.685763901 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219345 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219427 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219439 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719429981 +0000 UTC m=+24.685965956 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219479 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219480 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219493 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219502 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719495813 +0000 UTC m=+24.686031708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219588 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219615 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219640 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219697 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " 
pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219721 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219746 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219769 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219791 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219812 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: I1203 14:08:16.219837 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219842 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:16.230116 master-0 kubenswrapper[3187]: E1203 14:08:16.219899 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.219917 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719905425 +0000 UTC m=+24.686441320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.219936 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719928046 +0000 UTC m=+24.686463941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.219957 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.719947616 +0000 UTC m=+24.686483511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.219859 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.219993 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220047 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220095 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.720066 +0000 UTC m=+24.686601895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220122 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220171 3187 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220194 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.720188033 +0000 UTC m=+24.686723928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220217 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220239 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220260 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220313 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" 
(UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220335 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220361 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220383 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220390 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220406 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220443 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220465 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220486 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220497 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220790 3187 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.72077947 +0000 UTC m=+24.687315365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220508 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220823 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220877 3187 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220896 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220935 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220976 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.720967515 +0000 UTC m=+24.687503410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220640 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.220662 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.221018 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721012517 +0000 UTC m=+24.687548412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: E1203 14:08:16.221025 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:16.231136 master-0 kubenswrapper[3187]: I1203 14:08:16.220829 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221056 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721047598 +0000 UTC m=+24.687583493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221076 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221084 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221106 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721088829 +0000 UTC m=+24.687624914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221138 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.72112658 +0000 UTC m=+24.687662715 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221172 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721161251 +0000 UTC m=+24.687697146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221202 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221234 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221244 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721236823 +0000 UTC m=+24.687772718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221263 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221287 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221309 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221334 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221356 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221367 3187 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221388 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221389 3187 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221429 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721408278 +0000 UTC m=+24.687944173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221378 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221454 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721441259 +0000 UTC m=+24.687977374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221486 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221486 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.72147508 +0000 UTC m=+24.688011215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221510 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721503641 +0000 UTC m=+24.688039756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221532 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221560 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221580 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221605 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221580 3187 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221635 3187 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221629 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: I1203 14:08:16.221671 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221662 3187 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:16.232260 master-0 kubenswrapper[3187]: E1203 14:08:16.221666 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721657335 +0000 UTC m=+24.688193230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221649 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221766 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221788 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721780008 +0000 UTC m=+24.688315893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221623 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221810 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221821 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721815469 +0000 UTC m=+24.688351364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221861 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.72185207 +0000 UTC m=+24.688387955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221881 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.721875731 +0000 UTC m=+24.688411616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221897 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221925 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221951 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221957 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.221977 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.221992 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222005 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222026 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722017935 +0000 UTC m=+24.688553830 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222030 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222053 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222088 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222116 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722102778 +0000 UTC m=+24.688638913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222140 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222150 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222169 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722162619 +0000 UTC m=+24.688698514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222189 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222225 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222256 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222275 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222286 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222302 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722294693 +0000 UTC m=+24.688830588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222349 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222372 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722366095 +0000 UTC m=+24.688901990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222442 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: E1203 14:08:16.222528 3187 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:16.233291 master-0 kubenswrapper[3187]: I1203 14:08:16.222342 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222572 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722556251 +0000 UTC m=+24.689092326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222645 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222655 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222690 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722680964 +0000 UTC m=+24.689217069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222690 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222703 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222739 3187 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222761 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222773 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722764766 +0000 UTC m=+24.689300661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222803 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222795 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222844 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222873 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222886 3187 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222889 3187 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222896 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222930 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722920301 +0000 UTC m=+24.689456386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222943 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222954 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722942691 +0000 UTC m=+24.689478586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222954 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222973 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.222979 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.222995 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.722981153 +0000 UTC m=+24.689517228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.223013 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.723004853 +0000 UTC m=+24.689540748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.223037 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.223066 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.223091 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.223119 3187 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.234510 master-0 
kubenswrapper[3187]: I1203 14:08:16.223204 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: E1203 14:08:16.223235 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.723215589 +0000 UTC m=+24.689751484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.223556 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.224632 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.226740 3187 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.227460 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:16.234510 master-0 kubenswrapper[3187]: I1203 14:08:16.227536 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.241514 master-0 kubenswrapper[3187]: I1203 14:08:16.241460 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.260613 master-0 kubenswrapper[3187]: I1203 14:08:16.260540 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.282465 master-0 
kubenswrapper[3187]: I1203 14:08:16.282317 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.297264 master-0 kubenswrapper[3187]: E1203 14:08:16.295656 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:16.297264 master-0 kubenswrapper[3187]: E1203 14:08:16.295714 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.297264 master-0 kubenswrapper[3187]: E1203 14:08:16.295733 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.297264 master-0 kubenswrapper[3187]: E1203 14:08:16.295838 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.795807195 +0000 UTC m=+24.762343080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.314990 master-0 kubenswrapper[3187]: I1203 14:08:16.314920 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:16.321051 master-0 kubenswrapper[3187]: I1203 14:08:16.320984 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.324768 master-0 kubenswrapper[3187]: I1203 14:08:16.324685 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.325038 master-0 kubenswrapper[3187]: I1203 14:08:16.324867 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.325038 master-0 kubenswrapper[3187]: I1203 14:08:16.324872 3187 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.325038 master-0 kubenswrapper[3187]: I1203 14:08:16.325007 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.325135 master-0 kubenswrapper[3187]: I1203 14:08:16.325093 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.325166 master-0 kubenswrapper[3187]: I1203 14:08:16.325139 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.325296 master-0 kubenswrapper[3187]: I1203 14:08:16.325270 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.325402 master-0 kubenswrapper[3187]: I1203 14:08:16.325370 3187 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.325461 master-0 kubenswrapper[3187]: I1203 14:08:16.325413 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.325621 master-0 kubenswrapper[3187]: I1203 14:08:16.325595 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.325700 master-0 kubenswrapper[3187]: I1203 14:08:16.325683 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.325892 master-0 kubenswrapper[3187]: I1203 14:08:16.325602 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.325892 master-0 kubenswrapper[3187]: I1203 14:08:16.325742 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.325892 master-0 kubenswrapper[3187]: I1203 14:08:16.325778 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.325892 master-0 kubenswrapper[3187]: I1203 14:08:16.325818 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.325892 master-0 kubenswrapper[3187]: I1203 14:08:16.325864 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.326033 master-0 kubenswrapper[3187]: I1203 14:08:16.325976 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.326033 master-0 kubenswrapper[3187]: I1203 14:08:16.326013 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.326093 master-0 kubenswrapper[3187]: I1203 14:08:16.326037 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.326093 master-0 kubenswrapper[3187]: I1203 14:08:16.326052 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.326093 master-0 kubenswrapper[3187]: I1203 14:08:16.326089 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.326224 master-0 kubenswrapper[3187]: I1203 14:08:16.326202 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.326502 master-0 kubenswrapper[3187]: I1203 14:08:16.326415 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.326573 master-0 kubenswrapper[3187]: I1203 14:08:16.326462 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.326742 master-0 kubenswrapper[3187]: I1203 14:08:16.326709 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:16.326863 master-0 kubenswrapper[3187]: I1203 14:08:16.326785 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:16.327498 master-0 kubenswrapper[3187]: I1203 14:08:16.327369 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.327629 master-0 kubenswrapper[3187]: I1203 14:08:16.327531 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" 
(UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.327679 master-0 kubenswrapper[3187]: I1203 14:08:16.327626 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.327918 master-0 kubenswrapper[3187]: I1203 14:08:16.327867 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.327981 master-0 kubenswrapper[3187]: I1203 14:08:16.327919 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.327981 master-0 kubenswrapper[3187]: I1203 14:08:16.327957 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328051 master-0 kubenswrapper[3187]: I1203 14:08:16.328010 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.328162 master-0 kubenswrapper[3187]: I1203 14:08:16.328068 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328162 master-0 kubenswrapper[3187]: I1203 14:08:16.328148 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.328229 master-0 kubenswrapper[3187]: I1203 14:08:16.328211 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.328367 master-0 kubenswrapper[3187]: I1203 14:08:16.328310 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.328367 master-0 kubenswrapper[3187]: I1203 14:08:16.328338 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 
14:08:16.328367 master-0 kubenswrapper[3187]: I1203 14:08:16.328313 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.328507 master-0 kubenswrapper[3187]: I1203 14:08:16.328399 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.328507 master-0 kubenswrapper[3187]: I1203 14:08:16.328466 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.328507 master-0 kubenswrapper[3187]: I1203 14:08:16.328483 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.328672 master-0 kubenswrapper[3187]: I1203 14:08:16.328513 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.328672 master-0 kubenswrapper[3187]: I1203 14:08:16.328577 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.328672 master-0 kubenswrapper[3187]: I1203 14:08:16.328613 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328781 master-0 kubenswrapper[3187]: I1203 14:08:16.328725 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328781 master-0 kubenswrapper[3187]: I1203 14:08:16.328764 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328850 master-0 kubenswrapper[3187]: I1203 14:08:16.328761 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.328850 master-0 kubenswrapper[3187]: I1203 14:08:16.328822 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.328910 master-0 kubenswrapper[3187]: I1203 14:08:16.328873 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:16.328910 master-0 kubenswrapper[3187]: I1203 14:08:16.328882 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.328910 master-0 kubenswrapper[3187]: I1203 14:08:16.328889 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.328992 master-0 kubenswrapper[3187]: I1203 14:08:16.328949 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" 
Dec 03 14:08:16.329027 master-0 kubenswrapper[3187]: I1203 14:08:16.328994 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.329027 master-0 kubenswrapper[3187]: I1203 14:08:16.329021 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329098 master-0 kubenswrapper[3187]: I1203 14:08:16.328953 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:16.329098 master-0 kubenswrapper[3187]: I1203 14:08:16.329079 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329168 master-0 kubenswrapper[3187]: I1203 14:08:16.329084 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329168 master-0 
kubenswrapper[3187]: I1203 14:08:16.329118 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.329168 master-0 kubenswrapper[3187]: I1203 14:08:16.329161 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.329168 master-0 kubenswrapper[3187]: I1203 14:08:16.329103 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329161 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329218 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 
14:08:16.329228 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329260 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329298 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329337 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.329334 master-0 kubenswrapper[3187]: I1203 14:08:16.329344 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.329592 master-0 kubenswrapper[3187]: I1203 14:08:16.329370 
3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:16.329592 master-0 kubenswrapper[3187]: I1203 14:08:16.329487 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.329669 master-0 kubenswrapper[3187]: I1203 14:08:16.329631 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.329710 master-0 kubenswrapper[3187]: I1203 14:08:16.329688 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.329746 master-0 kubenswrapper[3187]: I1203 14:08:16.329721 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.329819 master-0 kubenswrapper[3187]: I1203 14:08:16.329774 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.329819 master-0 kubenswrapper[3187]: I1203 14:08:16.329811 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.329904 master-0 kubenswrapper[3187]: I1203 14:08:16.329836 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.329904 master-0 kubenswrapper[3187]: I1203 14:08:16.329862 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.329904 master-0 kubenswrapper[3187]: I1203 14:08:16.329879 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.330004 master-0 kubenswrapper[3187]: I1203 14:08:16.329943 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.330047 master-0 kubenswrapper[3187]: I1203 14:08:16.330007 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.330081 master-0 kubenswrapper[3187]: I1203 14:08:16.330041 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.330116 master-0 kubenswrapper[3187]: I1203 14:08:16.330096 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.330198 master-0 kubenswrapper[3187]: I1203 14:08:16.330168 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.330198 master-0 kubenswrapper[3187]: I1203 14:08:16.330188 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.330272 master-0 kubenswrapper[3187]: I1203 14:08:16.330211 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.330730 master-0 kubenswrapper[3187]: I1203 14:08:16.330685 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.330798 master-0 kubenswrapper[3187]: I1203 14:08:16.330777 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.330991 master-0 kubenswrapper[3187]: I1203 14:08:16.330896 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331047 master-0 kubenswrapper[3187]: I1203 14:08:16.330996 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") 
pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331120 master-0 kubenswrapper[3187]: I1203 14:08:16.331056 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.331175 master-0 kubenswrapper[3187]: I1203 14:08:16.331128 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.331436 master-0 kubenswrapper[3187]: I1203 14:08:16.331387 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.331471 master-0 kubenswrapper[3187]: I1203 14:08:16.331447 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.331508 master-0 kubenswrapper[3187]: I1203 14:08:16.331476 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.331646 master-0 kubenswrapper[3187]: I1203 14:08:16.331605 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.331683 master-0 kubenswrapper[3187]: I1203 14:08:16.331653 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331801 master-0 kubenswrapper[3187]: I1203 14:08:16.331768 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331801 master-0 kubenswrapper[3187]: I1203 14:08:16.331781 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.331883 master-0 kubenswrapper[3187]: I1203 14:08:16.331825 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod 
\"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331883 master-0 kubenswrapper[3187]: I1203 14:08:16.331872 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.331946 master-0 kubenswrapper[3187]: I1203 14:08:16.331904 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.331946 master-0 kubenswrapper[3187]: I1203 14:08:16.331941 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.332006 master-0 kubenswrapper[3187]: I1203 14:08:16.331974 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332039 master-0 kubenswrapper[3187]: I1203 14:08:16.332008 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: 
\"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.332039 master-0 kubenswrapper[3187]: I1203 14:08:16.332030 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332096 master-0 kubenswrapper[3187]: I1203 14:08:16.332043 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332096 master-0 kubenswrapper[3187]: I1203 14:08:16.332074 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.332096 master-0 kubenswrapper[3187]: I1203 14:08:16.332083 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332052 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332117 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332124 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332142 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332094 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332194 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod 
\"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332216 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332283 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332319 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332350 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332328 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332372 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332436 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332443 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332527 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.332551 master-0 kubenswrapper[3187]: I1203 14:08:16.332551 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " 
pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.332580 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.332715 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.332764 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.332816 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.332878 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.333031 
master-0 kubenswrapper[3187]: I1203 14:08:16.332967 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.333031 master-0 kubenswrapper[3187]: I1203 14:08:16.333016 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.333216 master-0 kubenswrapper[3187]: I1203 14:08:16.333159 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.345891 master-0 kubenswrapper[3187]: E1203 14:08:16.345051 3187 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:16.345891 master-0 kubenswrapper[3187]: E1203 14:08:16.345097 3187 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.345891 master-0 kubenswrapper[3187]: E1203 14:08:16.345114 3187 projected.go:194] Error preparing data for 
projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.345891 master-0 kubenswrapper[3187]: E1203 14:08:16.345197 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:16.84516657 +0000 UTC m=+24.811702465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.345891 master-0 kubenswrapper[3187]: W1203 14:08:16.345896 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799e819f_f4b2_4ac9_8fa4_7d4da7a79285.slice/crio-0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083 WatchSource:0}: Error finding container 0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083: Status 404 returned error can't find the container with id 0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083 Dec 03 14:08:16.434609 master-0 kubenswrapper[3187]: I1203 14:08:16.434554 3187 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " Dec 03 14:08:16.434900 master-0 kubenswrapper[3187]: I1203 
14:08:16.434640 3187 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " Dec 03 14:08:16.434900 master-0 kubenswrapper[3187]: I1203 14:08:16.434705 3187 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock" (OuterVolumeSpecName: "var-lock") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:16.434992 master-0 kubenswrapper[3187]: I1203 14:08:16.434904 3187 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:16.437296 master-0 kubenswrapper[3187]: I1203 14:08:16.437274 3187 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:16.437467 master-0 kubenswrapper[3187]: I1203 14:08:16.437300 3187 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:16.441449 master-0 kubenswrapper[3187]: I1203 14:08:16.441389 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:16.441641 master-0 kubenswrapper[3187]: I1203 14:08:16.441450 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.441641 master-0 kubenswrapper[3187]: I1203 14:08:16.441467 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.441641 master-0 kubenswrapper[3187]: I1203 14:08:16.441468 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:16.441641 master-0 kubenswrapper[3187]: I1203 14:08:16.441492 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: I1203 14:08:16.441388 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: E1203 14:08:16.441660 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: I1203 14:08:16.441709 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: I1203 14:08:16.441732 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: I1203 14:08:16.441737 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: E1203 14:08:16.441828 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: I1203 14:08:16.441850 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: E1203 14:08:16.442096 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: E1203 14:08:16.442152 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:16.442296 master-0 kubenswrapper[3187]: E1203 14:08:16.442220 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:16.443098 master-0 kubenswrapper[3187]: E1203 14:08:16.442323 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:16.443098 master-0 kubenswrapper[3187]: E1203 14:08:16.442383 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:16.443098 master-0 kubenswrapper[3187]: E1203 14:08:16.442513 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:16.443098 master-0 kubenswrapper[3187]: E1203 14:08:16.442599 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:16.443098 master-0 kubenswrapper[3187]: E1203 14:08:16.442704 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:16.552138 master-0 kubenswrapper[3187]: I1203 14:08:16.552073 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:16.562127 master-0 kubenswrapper[3187]: E1203 14:08:16.562094 3187 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:16.562127 master-0 kubenswrapper[3187]: E1203 14:08:16.562126 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.562258 master-0 kubenswrapper[3187]: E1203 14:08:16.562142 3187 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.562258 master-0 kubenswrapper[3187]: E1203 14:08:16.562134 3187 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.562258 master-0 kubenswrapper[3187]: E1203 14:08:16.562203 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.562258 master-0 kubenswrapper[3187]: E1203 14:08:16.562206 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.062184077 +0000 UTC m=+25.028719972 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.562392 master-0 kubenswrapper[3187]: E1203 14:08:16.562316 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.06227829 +0000 UTC m=+25.028814355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.562874 master-0 kubenswrapper[3187]: E1203 14:08:16.562845 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:16.562874 master-0 kubenswrapper[3187]: E1203 14:08:16.562869 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.562974 master-0 kubenswrapper[3187]: E1203 14:08:16.562882 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.562974 master-0 kubenswrapper[3187]: 
E1203 14:08:16.562926 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.062915448 +0000 UTC m=+25.029451523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.565775 master-0 kubenswrapper[3187]: E1203 14:08:16.565747 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:16.565775 master-0 kubenswrapper[3187]: E1203 14:08:16.565768 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.566003 master-0 kubenswrapper[3187]: E1203 14:08:16.565781 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.566003 master-0 kubenswrapper[3187]: E1203 14:08:16.565822 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.06580859 +0000 UTC m=+25.032344665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.567997 master-0 kubenswrapper[3187]: E1203 14:08:16.567970 3187 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.567997 master-0 kubenswrapper[3187]: E1203 14:08:16.567994 3187 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.568091 master-0 kubenswrapper[3187]: E1203 14:08:16.568005 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.568091 master-0 kubenswrapper[3187]: E1203 14:08:16.568050 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.068038804 +0000 UTC m=+25.034574899 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.570001 master-0 kubenswrapper[3187]: I1203 14:08:16.569971 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.571134 master-0 kubenswrapper[3187]: E1203 14:08:16.571094 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.571134 master-0 kubenswrapper[3187]: E1203 14:08:16.571125 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.571134 master-0 kubenswrapper[3187]: E1203 14:08:16.571138 3187 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.571240 master-0 kubenswrapper[3187]: E1203 14:08:16.571189 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.071176003 +0000 UTC m=+25.037711898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.571507 master-0 kubenswrapper[3187]: I1203 14:08:16.571479 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.575943 master-0 kubenswrapper[3187]: I1203 14:08:16.575911 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.575943 master-0 kubenswrapper[3187]: I1203 14:08:16.575915 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"3bedc5e58d7f0ce7e0557174208e5c4d17bd8d207f3a554e12ec072b39154b4a"} Dec 03 14:08:16.576121 master-0 kubenswrapper[3187]: I1203 14:08:16.575966 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" 
event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718"} Dec 03 14:08:16.577453 master-0 kubenswrapper[3187]: I1203 14:08:16.577390 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160"} Dec 03 14:08:16.577453 master-0 kubenswrapper[3187]: I1203 14:08:16.577447 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083"} Dec 03 14:08:16.578721 master-0 kubenswrapper[3187]: I1203 14:08:16.578669 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.579243 master-0 kubenswrapper[3187]: I1203 14:08:16.579143 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:16.579243 master-0 kubenswrapper[3187]: I1203 14:08:16.579206 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5"} Dec 03 14:08:16.580269 master-0 kubenswrapper[3187]: I1203 14:08:16.580209 3187 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d18858c-0fbc-4593-9aef-03b4a97f066d" Dec 03 14:08:16.580269 master-0 kubenswrapper[3187]: I1203 14:08:16.580263 3187 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="9d18858c-0fbc-4593-9aef-03b4a97f066d" Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.602712 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.602746 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.602764 3187 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.602935 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.102863105 +0000 UTC m=+25.069399000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.603606 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.603640 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.603654 3187 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: E1203 14:08:16.603738 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.103715889 +0000 UTC m=+25.070251784 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: I1203 14:08:16.606470 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:16.611786 master-0 kubenswrapper[3187]: I1203 14:08:16.609109 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.618187 master-0 kubenswrapper[3187]: E1203 14:08:16.618113 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.618187 master-0 kubenswrapper[3187]: E1203 14:08:16.618172 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.618187 master-0 kubenswrapper[3187]: E1203 14:08:16.618189 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.618456 master-0 kubenswrapper[3187]: E1203 14:08:16.618275 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.118249133 +0000 UTC m=+25.084785028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.620395 master-0 kubenswrapper[3187]: W1203 14:08:16.620330 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15782f65_35d2_4e95_bf49_81541c683ffe.slice/crio-182ad3d83d99bd0f44b84845199638371d2d4ad58dfb1451dee40ac6bfba5ae8 WatchSource:0}: Error finding container 182ad3d83d99bd0f44b84845199638371d2d4ad58dfb1451dee40ac6bfba5ae8: Status 404 returned error can't find the container with id 182ad3d83d99bd0f44b84845199638371d2d4ad58dfb1451dee40ac6bfba5ae8 Dec 03 14:08:16.640607 master-0 kubenswrapper[3187]: E1203 14:08:16.640539 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:16.640607 master-0 kubenswrapper[3187]: E1203 14:08:16.640598 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.640607 master-0 kubenswrapper[3187]: E1203 
14:08:16.640614 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.640880 master-0 kubenswrapper[3187]: E1203 14:08:16.640692 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.140668731 +0000 UTC m=+25.107204626 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.646550 master-0 kubenswrapper[3187]: I1203 14:08:16.646367 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:16.659506 master-0 kubenswrapper[3187]: I1203 14:08:16.659035 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:16.665783 master-0 kubenswrapper[3187]: W1203 14:08:16.665717 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b681889_eb2c_41fb_a1dc_69b99227b45b.slice/crio-d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875 WatchSource:0}: Error finding container d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875: Status 404 returned error can't find the container with id d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875 Dec 03 14:08:16.685381 master-0 kubenswrapper[3187]: W1203 14:08:16.685120 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb71ac8a5_987d_4eba_8bc0_a091f0a0de16.slice/crio-e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467 WatchSource:0}: Error finding container e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467: Status 404 returned error can't find the container with id e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467 Dec 03 14:08:16.697886 master-0 kubenswrapper[3187]: E1203 14:08:16.697586 3187 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:16.697886 master-0 kubenswrapper[3187]: E1203 14:08:16.697630 3187 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.697886 master-0 kubenswrapper[3187]: E1203 14:08:16.697651 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 
14:08:16.697886 master-0 kubenswrapper[3187]: E1203 14:08:16.697747 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.197715615 +0000 UTC m=+25.164251510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.701844 master-0 kubenswrapper[3187]: E1203 14:08:16.701673 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:16.701844 master-0 kubenswrapper[3187]: E1203 14:08:16.701714 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.701844 master-0 kubenswrapper[3187]: E1203 14:08:16.701735 3187 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.701844 master-0 kubenswrapper[3187]: E1203 14:08:16.701812 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.2017864 +0000 UTC m=+25.168322295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.704120 master-0 kubenswrapper[3187]: I1203 14:08:16.703015 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.723992 master-0 kubenswrapper[3187]: E1203 14:08:16.723929 3187 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.723992 master-0 kubenswrapper[3187]: E1203 14:08:16.723969 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.724466 master-0 kubenswrapper[3187]: E1203 14:08:16.724035 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.224014123 +0000 UTC m=+25.190550018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.744820 master-0 kubenswrapper[3187]: E1203 14:08:16.744764 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:16.744820 master-0 kubenswrapper[3187]: E1203 14:08:16.744813 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.744820 master-0 kubenswrapper[3187]: E1203 14:08:16.744835 3187 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: E1203 14:08:16.744924 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.244898258 +0000 UTC m=+25.211434153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756315 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756393 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756456 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756487 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756537 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756570 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756628 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756655 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756725 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756772 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756814 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756871 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.756931 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757076 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757124 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757201 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757232 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757256 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757322 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757367 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757394 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757458 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757485 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757537 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757572 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757620 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757647 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757705 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757804 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757835 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757883 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.757913 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.758034 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.758108 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:16.759004 master-0 kubenswrapper[3187]: I1203 14:08:16.758155 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 
14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758215 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758287 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758317 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758366 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758841 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod 
\"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758911 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758940 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.758986 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759024 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759077 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759107 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759165 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759228 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759267 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: 
\"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759326 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759352 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759398 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759464 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759505 3187 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759555 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759582 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759608 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759632 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: 
I1203 14:08:16.759725 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759759 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759819 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759846 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759870 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod 
\"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759919 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759949 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.759996 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.760038 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:16.760720 
master-0 kubenswrapper[3187]: I1203 14:08:16.760062 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.760098 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.760126 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.760181 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:16.760720 master-0 kubenswrapper[3187]: I1203 14:08:16.760218 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod 
\"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760244 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760293 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760332 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760355 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760400 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760446 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760473 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760498 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760527 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760582 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760699 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760729 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760791 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760820 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760844 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760871 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760909 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.760945 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761007 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761035 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761091 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761139 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761164 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761203 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761230 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761266 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761292 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " 
pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761319 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761343 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761368 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761404 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761446 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761475 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761505 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761540 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761567 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 
03 14:08:16.761775 master-0 kubenswrapper[3187]: I1203 14:08:16.761592 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761620 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761645 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761670 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761692 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761717 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761742 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761784 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761816 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 
14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761842 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761868 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761941 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.761982 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762017 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762043 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762067 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762093 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762123 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762148 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762203 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762231 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762260 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.762285 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: 
\"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764515 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764554 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764585 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764613 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764641 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764697 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764736 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764763 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764787 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764814 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: I1203 14:08:16.764840 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: E1203 14:08:16.764856 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: E1203 14:08:16.764890 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: E1203 14:08:16.764905 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: E1203 14:08:16.764971 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.264952048 +0000 UTC m=+25.231487943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.768235 master-0 kubenswrapper[3187]: E1203 14:08:16.764974 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.764866 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765037 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76502093 +0000 UTC m=+25.731556995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765065 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765093 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765138 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765163 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:16.769858 master-0 
kubenswrapper[3187]: I1203 14:08:16.765190 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765216 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765278 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765315 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765355 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765379 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765408 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765473 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765513 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765539 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765567 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765591 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765619 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765645 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 
14:08:16.769858 master-0 kubenswrapper[3187]: I1203 14:08:16.765671 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765672 3187 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765787 3187 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765801 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765792432 +0000 UTC m=+25.732328327 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765738 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765832 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765821283 +0000 UTC m=+25.732357368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765852 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765844264 +0000 UTC m=+25.732380159 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765887 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765901 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765911 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765904485 +0000 UTC m=+25.732440380 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765940 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765932 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765922646 +0000 UTC m=+25.732458541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.765980 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.765969537 +0000 UTC m=+25.732505682 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.766017 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.766033 3187 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:16.769858 master-0 kubenswrapper[3187]: E1203 14:08:16.766043 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766035369 +0000 UTC m=+25.732571274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766059 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76605332 +0000 UTC m=+25.732589215 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766073 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766089 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766111 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766094061 +0000 UTC m=+25.732629956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766131 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766120492 +0000 UTC m=+25.732656607 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766136 3187 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766156 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766150882 +0000 UTC m=+25.732686777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766163 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766191 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766184283 +0000 UTC m=+25.732720178 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766190 3187 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766221 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766215114 +0000 UTC m=+25.732751239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766242 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766267 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766271 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766263016 +0000 UTC m=+25.732799111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766301 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766303 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766295567 +0000 UTC m=+25.732831462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766337 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766328898 +0000 UTC m=+25.732865023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766347 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766367 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766376 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766368209 +0000 UTC m=+25.732904114 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766392 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766385099 +0000 UTC m=+25.732920994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766440 3187 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766451 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766466 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766458941 +0000 UTC m=+25.732995016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766484 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766476152 +0000 UTC m=+25.733012237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766508 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766531 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766554 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766542264 +0000 UTC m=+25.733078329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766563 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766576 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766566074 +0000 UTC m=+25.733101969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766602 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766588025 +0000 UTC m=+25.733124160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766610 3187 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766620 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766625 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:16.771240 master-0 kubenswrapper[3187]: E1203 14:08:16.766653 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766644 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766634846 +0000 UTC m=+25.733170741 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766679 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766670657 +0000 UTC m=+25.733206762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766697 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766689428 +0000 UTC m=+25.733225323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766715 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766705588 +0000 UTC m=+25.733241483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766717 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766723 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766730 3187 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766745 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766738949 +0000 UTC m=+25.733274844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766755 3187 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766761 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766752 +0000 UTC m=+25.733288075 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766798 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766789921 +0000 UTC m=+25.733325816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766803 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766811 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766804801 +0000 UTC m=+25.733340696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766820 3187 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766833 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766837 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766828372 +0000 UTC m=+25.733364467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766805 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766855 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766861 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766853492 +0000 UTC m=+25.733389597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766855 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766876 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766878 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766871063 +0000 UTC m=+25.733407188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766908 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766920 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766923 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766913 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766903894 +0000 UTC m=+25.733440009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766958 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766964 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766953395 +0000 UTC m=+25.733489520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766978 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.766980 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.766972326 +0000 UTC m=+25.733508451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.767006 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.767017 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767006607 +0000 UTC m=+25.733542702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.767030 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.767045 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:16.781565 master-0 kubenswrapper[3187]: E1203 14:08:16.767066 3187 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767074 3187 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767033 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767024507 +0000 UTC m=+25.733560612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767097 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767106 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767096109 +0000 UTC m=+25.733632194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767120 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76711379 +0000 UTC m=+25.733649915 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767134 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767141 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767135 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76712856 +0000 UTC m=+25.733664665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767161 3187 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767177 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767191 3187 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767163 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767156201 +0000 UTC m=+25.733692096 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767214 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767205662 +0000 UTC m=+25.733741757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767224 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767227 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767220663 +0000 UTC m=+25.733756758 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767249 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767257 3187 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767265 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767253984 +0000 UTC m=+25.733790079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767284 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767276925 +0000 UTC m=+25.733813010 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767299 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767292235 +0000 UTC m=+25.733828350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767310 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767315 3187 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767314 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed.
No retries permitted until 2025-12-03 14:08:17.767306835 +0000 UTC m=+25.733843030 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767345 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767348 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767362 3187 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767392 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767335716 +0000 UTC m=+25.733871611 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767396 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767403 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767411 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767401758 +0000 UTC m=+25.733937653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767443 3187 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767457 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767446 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767436599 +0000 UTC m=+25.733972504 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767478 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767489 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76747888 +0000 UTC m=+25.734014965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:16.782969 master-0 kubenswrapper[3187]: E1203 14:08:16.767482 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767503 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767496311 +0000 UTC m=+25.734032416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767511 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767518 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767511961 +0000 UTC m=+25.734048066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767510 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767534 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767527282 +0000 UTC m=+25.734063397 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767539 3187 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767549 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767542392 +0000 UTC m=+25.734078507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767560 3187 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767569 3187 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.766680 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767563 3187 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767558593 +0000 UTC m=+25.734094488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767603 3187 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767612 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767602074 +0000 UTC m=+25.734138159 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767607 3187 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767628 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767620554 +0000 UTC m=+25.734156659 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.766679 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767644 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767636495 +0000 UTC m=+25.734172590 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767638 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767659 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767652035 +0000 UTC m=+25.734188150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767653 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767672 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767666096 +0000 UTC m=+25.734202211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767691 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767229 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767707 3187 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767695 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767318 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767735 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.766780 3187 configmap.go:193] Couldn't get configMap 
openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767702 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767692446 +0000 UTC m=+25.734228551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767759 3187 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767765 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767755688 +0000 UTC m=+25.734291793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767782 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767774379 +0000 UTC m=+25.734310474 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767792 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767251 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767797 3187 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767798 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767790619 +0000 UTC m=+25.734326734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:16.784209 master-0 kubenswrapper[3187]: E1203 14:08:16.767822 3187 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767836 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76782791 +0000 UTC m=+25.734364005 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.766532 3187 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767853 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767843871 +0000 UTC m=+25.734379976 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767285 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767863 3187 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767870 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767859991 +0000 UTC m=+25.734396066 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.766968 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767886 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767879592 +0000 UTC m=+25.734415727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767890 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767898 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 
14:08:16.767900 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767893792 +0000 UTC m=+25.734429917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767938 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767859 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767954 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.766646 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767940 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767923473 +0000 UTC m=+25.734459368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767966 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767991 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767981845 +0000 UTC m=+25.734517910 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767996 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768006 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.767999545 +0000 UTC m=+25.734535660 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768013 3187 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768023 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768015956 +0000 UTC m=+25.734552081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767836 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768037 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768030646 +0000 UTC m=+25.734566751 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768044 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768048 3187 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768051 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768044836 +0000 UTC m=+25.734580921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767660 3187 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768067 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768060447 +0000 UTC m=+25.734596572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.767614 3187 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768086 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768077717 +0000 UTC m=+25.734613832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768096 3187 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768098 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768101 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768094018 +0000 UTC m=+25.734630133 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768155 3187 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768174 3187 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768183 3187 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:16.786775 master-0 kubenswrapper[3187]: E1203 14:08:16.768191 3187 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768176 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76816454 +0000 UTC m=+25.734700615 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768222 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768231 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768220681 +0000 UTC m=+25.734756776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768224 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768247 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768247 3187 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768247 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768240052 +0000 UTC m=+25.734776157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768292 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768295 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768297 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768287703 +0000 UTC m=+25.734823788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768332 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768322814 +0000 UTC m=+25.734858899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768348 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768356 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768362 3187 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768357 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768347795 +0000 UTC m=+25.734883870 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768389 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768394 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768383866 +0000 UTC m=+25.734919971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768453 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768440718 +0000 UTC m=+25.734976793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768466 3187 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768476 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768468448 +0000 UTC m=+25.735004343 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768491 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768484709 +0000 UTC m=+25.735020614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768506 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768524 3187 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768536 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768560 3187 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768573 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768588 3187 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768622 3187 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768636 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768650 3187 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768658 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768676 3187 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768688 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768705 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768736 3187 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768744 3187 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768757 3187 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768771 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768777 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768800 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768822 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768836 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768853 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768867 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768875 3187 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.767107 3187 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768707 3187 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768904 3187 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:16.792252 master-0 kubenswrapper[3187]: E1203 14:08:16.768822 3187 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768928 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768937 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768508 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768500379 +0000 UTC m=+25.735036484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768454 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768623 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768961 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768951252 +0000 UTC m=+25.735487327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768806 3187 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768976 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768968713 +0000 UTC m=+25.735504828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768332 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.767822 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768989 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.768982423 +0000 UTC m=+25.735518548 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768051 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769011 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769002774 +0000 UTC m=+25.735538909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769016 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.766983 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.767018 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.767198 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.768297 3187 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769027 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769018744 +0000 UTC m=+25.735554639 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769072 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769065625 +0000 UTC m=+25.735601520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769086 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769080946 +0000 UTC m=+25.735616841 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769095 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769091066 +0000 UTC m=+25.735626961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769108 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769101046 +0000 UTC m=+25.735636941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769120 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769115287 +0000 UTC m=+25.735651182 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769130 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769125397 +0000 UTC m=+25.735661292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769143 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769136748 +0000 UTC m=+25.735672643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769154 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769149208 +0000 UTC m=+25.735685103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769163 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769158588 +0000 UTC m=+25.735694483 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769173 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769167928 +0000 UTC m=+25.735703823 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:16.794908 master-0 kubenswrapper[3187]: E1203 14:08:16.769184 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769177859 +0000 UTC m=+25.735713754 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769194 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769188859 +0000 UTC m=+25.735724754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769205 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769200059 +0000 UTC m=+25.735735954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769214 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.76921014 +0000 UTC m=+25.735746035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769226 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76921989 +0000 UTC m=+25.735755785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769237 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76923156 +0000 UTC m=+25.735767455 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769248 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769242641 +0000 UTC m=+25.735778536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769258 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769252601 +0000 UTC m=+25.735788496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769272 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769265041 +0000 UTC m=+25.735800936 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769286 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769280432 +0000 UTC m=+25.735816327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769298 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769291802 +0000 UTC m=+25.735827697 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769310 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769303352 +0000 UTC m=+25.735839237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769321 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769315433 +0000 UTC m=+25.735851328 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769331 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769325903 +0000 UTC m=+25.735861798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769342 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769336473 +0000 UTC m=+25.735872368 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769352 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769347184 +0000 UTC m=+25.735883079 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769363 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769357554 +0000 UTC m=+25.735893449 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769375 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769370214 +0000 UTC m=+25.735906109 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769387 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769380644 +0000 UTC m=+25.735916529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769396 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769391985 +0000 UTC m=+25.735927880 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769407 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769402655 +0000 UTC m=+25.735938550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:16.796188 master-0 kubenswrapper[3187]: E1203 14:08:16.769613 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769411915 +0000 UTC m=+25.735947810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769632 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769625051 +0000 UTC m=+25.736160946 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769650 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769641592 +0000 UTC m=+25.736177717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769686 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769657812 +0000 UTC m=+25.736193917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769698 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769693503 +0000 UTC m=+25.736229398 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769714 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769705684 +0000 UTC m=+25.736241789 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769726 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769720864 +0000 UTC m=+25.736256979 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769740 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769732904 +0000 UTC m=+25.736269019 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769773 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769746755 +0000 UTC m=+25.736301480 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769789 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769781746 +0000 UTC m=+25.736317861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769802 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769796006 +0000 UTC m=+25.736332151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769816 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769809907 +0000 UTC m=+25.736346002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769848 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769822277 +0000 UTC m=+25.736358402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769867 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769857908 +0000 UTC m=+25.736394013 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769891 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769875438 +0000 UTC m=+25.736411524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769906 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769897779 +0000 UTC m=+25.736433894 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769940 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.76993327 +0000 UTC m=+25.736469375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769952 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769946071 +0000 UTC m=+25.736482206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769966 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.769959741 +0000 UTC m=+25.736495866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.769984 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.769974901 +0000 UTC m=+25.736511016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.770019 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770011912 +0000 UTC m=+25.736548017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.770039 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770031153 +0000 UTC m=+25.736567048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:16.797055 master-0 kubenswrapper[3187]: E1203 14:08:16.770054 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770046893 +0000 UTC m=+25.736583008 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770091 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770064854 +0000 UTC m=+25.736600959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770106 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770099305 +0000 UTC m=+25.736635420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770122 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770114315 +0000 UTC m=+25.736650430 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770136 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770130096 +0000 UTC m=+25.736666211 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770173 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770144536 +0000 UTC m=+25.736680631 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770188 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770181537 +0000 UTC m=+25.736717642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770202 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.770196238 +0000 UTC m=+25.736732343 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770215 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770210248 +0000 UTC m=+25.736746353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770249 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770222938 +0000 UTC m=+25.736759053 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: E1203 14:08:16.770266 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.770258549 +0000 UTC m=+25.736794664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:16.798122 master-0 kubenswrapper[3187]: I1203 14:08:16.795856 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:16.803186 master-0 kubenswrapper[3187]: E1203 14:08:16.803143 3187 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.803186 master-0 kubenswrapper[3187]: E1203 14:08:16.803183 3187 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.803186 master-0 kubenswrapper[3187]: 
E1203 14:08:16.803201 3187 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.803528 master-0 kubenswrapper[3187]: E1203 14:08:16.803291 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.303253699 +0000 UTC m=+25.269789594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.819309 master-0 kubenswrapper[3187]: E1203 14:08:16.819249 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:16.819309 master-0 kubenswrapper[3187]: E1203 14:08:16.819295 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.819492 master-0 kubenswrapper[3187]: E1203 14:08:16.819358 3187 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.819541 
master-0 kubenswrapper[3187]: E1203 14:08:16.819493 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.31946627 +0000 UTC m=+25.286002165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.848498 master-0 kubenswrapper[3187]: I1203 14:08:16.848438 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:16.857967 master-0 kubenswrapper[3187]: I1203 14:08:16.857743 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:16.862526 master-0 kubenswrapper[3187]: E1203 14:08:16.862479 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:16.862526 master-0 kubenswrapper[3187]: E1203 14:08:16.862523 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.862687 master-0 kubenswrapper[3187]: E1203 14:08:16.862539 3187 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.862687 master-0 kubenswrapper[3187]: E1203 14:08:16.862612 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.362592457 +0000 UTC m=+25.329128342 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.864356 master-0 kubenswrapper[3187]: E1203 14:08:16.864311 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:16.864356 master-0 kubenswrapper[3187]: E1203 14:08:16.864354 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.864461 master-0 kubenswrapper[3187]: E1203 14:08:16.864371 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.864516 master-0 kubenswrapper[3187]: E1203 14:08:16.864465 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.36444495 +0000 UTC m=+25.330980845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.867926 master-0 kubenswrapper[3187]: I1203 14:08:16.867852 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:16.868162 master-0 kubenswrapper[3187]: I1203 14:08:16.868042 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:16.868230 master-0 kubenswrapper[3187]: E1203 14:08:16.868206 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868233 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868248 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod 
openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868306 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.86828588 +0000 UTC m=+25.834821775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868498 3187 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868529 3187 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868546 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.869832 master-0 kubenswrapper[3187]: E1203 14:08:16.868733 3187 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.868706241 +0000 UTC m=+25.835242306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.891548 master-0 kubenswrapper[3187]: E1203 14:08:16.891507 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:16.891548 master-0 kubenswrapper[3187]: E1203 14:08:16.891547 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:16.891695 master-0 kubenswrapper[3187]: E1203 14:08:16.891566 3187 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:16.891695 master-0 kubenswrapper[3187]: E1203 14:08:16.891660 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.391632814 +0000 UTC m=+25.358168719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.087345 master-0 kubenswrapper[3187]: I1203 14:08:17.087282 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:17.088273 master-0 kubenswrapper[3187]: I1203 14:08:17.087495 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:17.088273 master-0 kubenswrapper[3187]: I1203 14:08:17.087531 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.088273 master-0 kubenswrapper[3187]: I1203 14:08:17.087573 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: 
\"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:17.088273 master-0 kubenswrapper[3187]: I1203 14:08:17.087714 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:17.088273 master-0 kubenswrapper[3187]: I1203 14:08:17.087791 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088815 3187 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088846 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088858 3187 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088905 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.088892738 +0000 UTC m=+26.055428633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088947 3187 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088957 3187 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088964 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.088984 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.088976711 +0000 UTC m=+26.055512596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089022 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089031 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089038 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089057 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.089052133 +0000 UTC m=+26.055588028 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089094 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089104 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089110 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089126 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.089121575 +0000 UTC m=+26.055657460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089165 3187 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089173 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089191 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.089185187 +0000 UTC m=+26.055721082 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089226 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089235 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089241 3187 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: E1203 14:08:17.089260 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.089253859 +0000 UTC m=+26.055789744 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.089609 master-0 kubenswrapper[3187]: I1203 14:08:17.089539 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.090634 master-0 kubenswrapper[3187]: I1203 14:08:17.090608 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:17.092467 master-0 kubenswrapper[3187]: E1203 14:08:17.092236 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.092467 master-0 kubenswrapper[3187]: E1203 14:08:17.092270 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.092467 master-0 kubenswrapper[3187]: E1203 14:08:17.092286 3187 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod 
openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.092467 master-0 kubenswrapper[3187]: E1203 14:08:17.092347 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.592329566 +0000 UTC m=+25.558865521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.093258 master-0 kubenswrapper[3187]: E1203 14:08:17.092513 3187 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.093258 master-0 kubenswrapper[3187]: E1203 14:08:17.092555 3187 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.093258 master-0 kubenswrapper[3187]: E1203 14:08:17.092572 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object 
"openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.093258 master-0 kubenswrapper[3187]: E1203 14:08:17.092656 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.592636065 +0000 UTC m=+25.559171960 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.095353 master-0 kubenswrapper[3187]: I1203 14:08:17.095269 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:17.095837 master-0 kubenswrapper[3187]: I1203 14:08:17.095800 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:17.096247 master-0 kubenswrapper[3187]: E1203 
14:08:17.096180 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096707 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096735 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096743 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096750 3187 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096763 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096827 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.596805754 +0000 UTC m=+25.563341649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.097131 master-0 kubenswrapper[3187]: E1203 14:08:17.096858 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.596843115 +0000 UTC m=+25.563379190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.098320 master-0 kubenswrapper[3187]: I1203 14:08:17.097284 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:17.105409 master-0 kubenswrapper[3187]: I1203 14:08:17.105235 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: 
\"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:17.105684 master-0 kubenswrapper[3187]: I1203 14:08:17.105388 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.146511 master-0 kubenswrapper[3187]: I1203 14:08:17.146373 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:17.149007 master-0 kubenswrapper[3187]: I1203 14:08:17.148961 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:17.178967 master-0 kubenswrapper[3187]: I1203 14:08:17.178887 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-kk4tm" Dec 03 14:08:17.178967 master-0 kubenswrapper[3187]: I1203 14:08:17.178919 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:17.184983 master-0 kubenswrapper[3187]: I1203 14:08:17.184930 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:17.190817 master-0 kubenswrapper[3187]: I1203 14:08:17.190691 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:17.191084 master-0 kubenswrapper[3187]: I1203 14:08:17.191057 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:17.191187 master-0 kubenswrapper[3187]: I1203 14:08:17.191123 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: 
\"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:17.191187 master-0 kubenswrapper[3187]: I1203 14:08:17.191152 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191738 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191775 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191789 3187 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191738 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191878 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191896 3187 projected.go:288] 
Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191900 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191917 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191919 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191854 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.191831528 +0000 UTC m=+26.158367603 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.191959 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.191948502 +0000 UTC m=+26.158484567 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.192125 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.192110396 +0000 UTC m=+26.158646301 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.192392 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.192458 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.192617 3187 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.193752 master-0 kubenswrapper[3187]: E1203 14:08:17.192701 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.192681053 +0000 UTC m=+26.159216938 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.199805 master-0 kubenswrapper[3187]: I1203 14:08:17.199705 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:17.206356 master-0 kubenswrapper[3187]: I1203 14:08:17.206323 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:17.210602 master-0 kubenswrapper[3187]: I1203 14:08:17.210331 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.219747 master-0 kubenswrapper[3187]: W1203 14:08:17.219640 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ef37bba_85d9_4303_80c0_aac3dc49d3d9.slice/crio-99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266 WatchSource:0}: Error finding container 99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266: Status 404 returned error can't find the container with id 
99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266 Dec 03 14:08:17.223889 master-0 kubenswrapper[3187]: I1203 14:08:17.223828 3187 request.go:700] Waited for 1.009191304s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token Dec 03 14:08:17.237934 master-0 kubenswrapper[3187]: E1203 14:08:17.237771 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:17.237934 master-0 kubenswrapper[3187]: E1203 14:08:17.237812 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.237934 master-0 kubenswrapper[3187]: E1203 14:08:17.237830 3187 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.237934 master-0 kubenswrapper[3187]: E1203 14:08:17.237904 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.737881429 +0000 UTC m=+25.704417324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.239104 master-0 kubenswrapper[3187]: E1203 14:08:17.239063 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.239104 master-0 kubenswrapper[3187]: E1203 14:08:17.239103 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.239209 master-0 kubenswrapper[3187]: E1203 14:08:17.239118 3187 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.239209 master-0 kubenswrapper[3187]: E1203 14:08:17.239191 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.739172746 +0000 UTC m=+25.705708631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.253270 master-0 kubenswrapper[3187]: I1203 14:08:17.253072 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:17.276545 master-0 kubenswrapper[3187]: W1203 14:08:17.275970 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeecc43f5_708f_4395_98cc_696b243d6321.slice/crio-6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5 WatchSource:0}: Error finding container 6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5: Status 404 returned error can't find the container with id 6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5 Dec 03 14:08:17.298300 master-0 kubenswrapper[3187]: I1203 14:08:17.298234 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:17.298401 master-0 kubenswrapper[3187]: I1203 14:08:17.298310 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:17.298515 master-0 kubenswrapper[3187]: E1203 14:08:17.298471 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:17.298515 master-0 kubenswrapper[3187]: E1203 14:08:17.298504 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.298515 master-0 kubenswrapper[3187]: E1203 14:08:17.298521 3187 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: I1203 14:08:17.298528 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: E1203 14:08:17.298585 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.298566677 +0000 UTC m=+26.265102572 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: I1203 14:08:17.298614 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: E1203 14:08:17.298635 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: E1203 14:08:17.298653 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.298748 master-0 kubenswrapper[3187]: E1203 14:08:17.298663 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298730 3187 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 
14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298791 3187 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298812 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298819 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298836 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298845 3187 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: I1203 14:08:17.298766 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298746 3187 
projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298900 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298921 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.298858765 +0000 UTC m=+26.265394660 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298980 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.298966828 +0000 UTC m=+26.265503013 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299175 master-0 kubenswrapper[3187]: E1203 14:08:17.298999 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.298990079 +0000 UTC m=+26.265526184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.299527 master-0 kubenswrapper[3187]: E1203 14:08:17.299200 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.299171864 +0000 UTC m=+26.265707879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399479 3187 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399789 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399815 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399830 3187 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399905 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:17.89988338 +0000 UTC m=+25.866419275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399939 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399952 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.399995 3187 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400032 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.900021534 +0000 UTC m=+25.866557429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400320 3187 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400373 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400329 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400496 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400505 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.900469817 +0000 UTC m=+25.867005772 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400539 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.400661 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.900630631 +0000 UTC m=+25.867166526 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403104 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403164 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403198 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403243 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod 
\"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403346 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403340 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403367 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403378 3187 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403462 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403472 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403480 
3187 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403534 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403542 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403548 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403572 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.403562225 +0000 UTC m=+26.370098120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403586 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.403580145 +0000 UTC m=+26.370116040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403631 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403649 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403659 3187 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object 
"openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403684 3187 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403710 3187 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: I1203 14:08:17.403696 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403725 3187 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.403829 master-0 kubenswrapper[3187]: E1203 14:08:17.403693 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.403682568 +0000 UTC m=+26.370218463 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.409599 master-0 kubenswrapper[3187]: I1203 14:08:17.404058 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.409599 master-0 kubenswrapper[3187]: E1203 14:08:17.404130 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.40411083 +0000 UTC m=+26.370646905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.409599 master-0 kubenswrapper[3187]: E1203 14:08:17.404528 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.404511002 +0000 UTC m=+26.371046897 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.424224 master-0 kubenswrapper[3187]: I1203 14:08:17.424134 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:17.429501 master-0 kubenswrapper[3187]: I1203 14:08:17.429287 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:17.441178 master-0 kubenswrapper[3187]: I1203 14:08:17.440584 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.441376 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: I1203 14:08:17.441592 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.441704 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: I1203 14:08:17.441773 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.441852 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: I1203 14:08:17.441903 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.442002 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: I1203 14:08:17.442057 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.442114 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: I1203 14:08:17.442160 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:17.442219 master-0 kubenswrapper[3187]: E1203 14:08:17.442214 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: I1203 14:08:17.442261 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: E1203 14:08:17.442326 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: I1203 14:08:17.442376 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: E1203 14:08:17.442507 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: I1203 14:08:17.442564 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: E1203 14:08:17.442696 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:17.442757 master-0 kubenswrapper[3187]: I1203 14:08:17.442733 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: E1203 14:08:17.442771 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: I1203 14:08:17.442824 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: E1203 14:08:17.442879 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: I1203 14:08:17.442917 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: E1203 14:08:17.442956 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: I1203 14:08:17.442992 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: E1203 14:08:17.443031 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: I1203 14:08:17.443068 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:17.443142 master-0 kubenswrapper[3187]: E1203 14:08:17.443134 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: I1203 14:08:17.443177 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: E1203 14:08:17.443221 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: I1203 14:08:17.443265 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: E1203 14:08:17.443332 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: I1203 14:08:17.443374 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: E1203 14:08:17.443445 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: I1203 14:08:17.443484 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:17.443516 master-0 kubenswrapper[3187]: E1203 14:08:17.443527 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: I1203 14:08:17.443566 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: E1203 14:08:17.443607 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: I1203 14:08:17.443639 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: E1203 14:08:17.443674 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: I1203 14:08:17.443709 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: E1203 14:08:17.443744 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:17.443805 master-0 kubenswrapper[3187]: I1203 14:08:17.443778 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:17.444110 master-0 kubenswrapper[3187]: E1203 14:08:17.443816 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:17.444110 master-0 kubenswrapper[3187]: I1203 14:08:17.443858 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:17.444110 master-0 kubenswrapper[3187]: E1203 14:08:17.443982 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:17.444110 master-0 kubenswrapper[3187]: I1203 14:08:17.444028 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.444270 master-0 kubenswrapper[3187]: E1203 14:08:17.444122 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:17.444270 master-0 kubenswrapper[3187]: I1203 14:08:17.444162 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.444270 master-0 kubenswrapper[3187]: E1203 14:08:17.444214 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:17.444270 master-0 kubenswrapper[3187]: I1203 14:08:17.444249 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:17.444443 master-0 kubenswrapper[3187]: E1203 14:08:17.444300 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:17.444443 master-0 kubenswrapper[3187]: I1203 14:08:17.444344 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:17.444443 master-0 kubenswrapper[3187]: E1203 14:08:17.444381 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:17.444619 master-0 kubenswrapper[3187]: I1203 14:08:17.444587 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:17.444683 master-0 kubenswrapper[3187]: E1203 14:08:17.444662 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:17.444737 master-0 kubenswrapper[3187]: I1203 14:08:17.444720 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:17.444774 master-0 kubenswrapper[3187]: E1203 14:08:17.444765 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:17.444825 master-0 kubenswrapper[3187]: I1203 14:08:17.444811 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:17.444892 master-0 kubenswrapper[3187]: E1203 14:08:17.444875 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:17.444941 master-0 kubenswrapper[3187]: I1203 14:08:17.444933 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.445090 master-0 kubenswrapper[3187]: E1203 14:08:17.445062 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:17.445134 master-0 kubenswrapper[3187]: I1203 14:08:17.445102 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:17.445172 master-0 kubenswrapper[3187]: E1203 14:08:17.445139 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:17.445217 master-0 kubenswrapper[3187]: I1203 14:08:17.445177 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:17.445252 master-0 kubenswrapper[3187]: E1203 14:08:17.445241 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:17.445290 master-0 kubenswrapper[3187]: I1203 14:08:17.445283 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:17.445363 master-0 kubenswrapper[3187]: E1203 14:08:17.445333 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:17.445403 master-0 kubenswrapper[3187]: I1203 14:08:17.445387 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:17.445535 master-0 kubenswrapper[3187]: E1203 14:08:17.445439 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:17.445535 master-0 kubenswrapper[3187]: I1203 14:08:17.445481 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:17.445599 master-0 kubenswrapper[3187]: E1203 14:08:17.445543 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:17.445599 master-0 kubenswrapper[3187]: I1203 14:08:17.445578 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:17.445666 master-0 kubenswrapper[3187]: E1203 14:08:17.445624 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:17.445666 master-0 kubenswrapper[3187]: I1203 14:08:17.445660 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:17.445731 master-0 kubenswrapper[3187]: I1203 14:08:17.445680 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:17.445731 master-0 kubenswrapper[3187]: I1203 14:08:17.445728 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:17.445835 master-0 kubenswrapper[3187]: I1203 14:08:17.445819 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.445867 master-0 kubenswrapper[3187]: E1203 14:08:17.445821 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:17.445867 master-0 kubenswrapper[3187]: I1203 14:08:17.445857 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:17.445930 master-0 kubenswrapper[3187]: I1203 14:08:17.445895 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:17.445930 master-0 kubenswrapper[3187]: I1203 14:08:17.445906 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.445930 master-0 kubenswrapper[3187]: I1203 14:08:17.445924 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:17.446006 master-0 kubenswrapper[3187]: E1203 14:08:17.445698 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:17.446049 master-0 kubenswrapper[3187]: E1203 14:08:17.446030 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:17.446049 master-0 kubenswrapper[3187]: I1203 14:08:17.446041 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:17.446112 master-0 kubenswrapper[3187]: I1203 14:08:17.446057 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:17.446112 master-0 kubenswrapper[3187]: I1203 14:08:17.446077 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:17.446112 master-0 kubenswrapper[3187]: I1203 14:08:17.446083 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:17.446112 master-0 kubenswrapper[3187]: I1203 14:08:17.446102 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:17.446112 master-0 kubenswrapper[3187]: I1203 14:08:17.446109 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:17.446255 master-0 kubenswrapper[3187]: E1203 14:08:17.446160 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:17.446255 master-0 kubenswrapper[3187]: I1203 14:08:17.446191 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:17.446255 master-0 kubenswrapper[3187]: I1203 14:08:17.446213 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:17.446336 master-0 kubenswrapper[3187]: E1203 14:08:17.446288 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:17.446336 master-0 kubenswrapper[3187]: I1203 14:08:17.446327 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:17.446389 master-0 kubenswrapper[3187]: I1203 14:08:17.446362 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:17.446389 master-0 kubenswrapper[3187]: E1203 14:08:17.446383 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:17.446522 master-0 kubenswrapper[3187]: E1203 14:08:17.446413 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:17.446563 master-0 kubenswrapper[3187]: E1203 14:08:17.446549 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:17.446634 master-0 kubenswrapper[3187]: E1203 14:08:17.446614 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:17.446689 master-0 kubenswrapper[3187]: E1203 14:08:17.446673 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:17.446741 master-0 kubenswrapper[3187]: E1203 14:08:17.446725 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:08:17.446792 master-0 kubenswrapper[3187]: E1203 14:08:17.446776 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:08:17.446862 master-0 kubenswrapper[3187]: E1203 14:08:17.446845 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:17.447195 master-0 kubenswrapper[3187]: E1203 14:08:17.447121 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:17.447321 master-0 kubenswrapper[3187]: E1203 14:08:17.447296 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:17.447476 master-0 kubenswrapper[3187]: E1203 14:08:17.447450 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:17.447561 master-0 kubenswrapper[3187]: E1203 14:08:17.447539 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:17.447659 master-0 kubenswrapper[3187]: E1203 14:08:17.447638 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:17.469366 master-0 kubenswrapper[3187]: E1203 14:08:17.469299 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.469366 master-0 kubenswrapper[3187]: E1203 14:08:17.469345 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.469366 master-0 kubenswrapper[3187]: E1203 14:08:17.469360 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.469698 master-0 kubenswrapper[3187]: E1203 14:08:17.469451 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:17.969408309 +0000 UTC m=+25.935944204 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.475682 master-0 kubenswrapper[3187]: I1203 14:08:17.475577 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:17.492286 master-0 kubenswrapper[3187]: I1203 14:08:17.492203 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:17.539136 master-0 kubenswrapper[3187]: E1203 14:08:17.539021 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.539136 master-0 kubenswrapper[3187]: E1203 14:08:17.539068 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.539136 master-0 kubenswrapper[3187]: E1203 14:08:17.539087 3187 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.539587 master-0 kubenswrapper[3187]: E1203 14:08:17.539190 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.039161304 +0000 UTC m=+26.005697199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.539587 master-0 kubenswrapper[3187]: I1203 14:08:17.539313 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:17.542978 master-0 kubenswrapper[3187]: I1203 14:08:17.542930 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:17.543095 master-0 kubenswrapper[3187]: I1203 14:08:17.543059 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:17.543213 master-0 kubenswrapper[3187]: I1203 14:08:17.543155 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:17.565175 master-0 kubenswrapper[3187]: I1203 14:08:17.564737 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 14:08:17.583644 master-0 kubenswrapper[3187]: I1203 14:08:17.583533 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8"}
Dec 03 14:08:17.584704 master-0 kubenswrapper[3187]: I1203 14:08:17.584608 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8"}
Dec 03 14:08:17.585522 master-0 kubenswrapper[3187]: I1203 14:08:17.585489 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266"}
Dec 03 14:08:17.586396 master-0 kubenswrapper[3187]: I1203 14:08:17.586363 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3"}
Dec 03 14:08:17.588177 master-0 kubenswrapper[3187]: I1203 14:08:17.588097 3187 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="93043e69ade19a76194367d0e479728ca1d60e88105dc3caf6f3be29dbabbc6a" exitCode=0
Dec 03 14:08:17.588366 master-0 kubenswrapper[3187]: I1203 14:08:17.588202 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"93043e69ade19a76194367d0e479728ca1d60e88105dc3caf6f3be29dbabbc6a"}
Dec 03 14:08:17.589816 master-0 kubenswrapper[3187]: I1203 14:08:17.589776 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"9f860e51c2b1bff360e9163c1ad85b178bb2087ce4f03e4872bdb65a47719469"}
Dec 03 14:08:17.589816 master-0 kubenswrapper[3187]: I1203 14:08:17.589814 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8"}
Dec 03 14:08:17.591956 master-0 kubenswrapper[3187]: I1203 14:08:17.591879 3187 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="574a1200efdf4a517bb40025bec2aff5c6d2270f8ea9365cef5bff5b426b3524" exitCode=0
Dec 03 14:08:17.592023 master-0 kubenswrapper[3187]: I1203 14:08:17.591970 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"574a1200efdf4a517bb40025bec2aff5c6d2270f8ea9365cef5bff5b426b3524"}
Dec 03 14:08:17.592023 master-0 kubenswrapper[3187]: I1203 14:08:17.592007 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467"}
Dec 03 14:08:17.593378 master-0 kubenswrapper[3187]: I1203 14:08:17.593330 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5"}
Dec 03 14:08:17.597605 master-0 kubenswrapper[3187]: I1203 14:08:17.597554 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"38d3b50b66712e81beadff3b6029073280e3d8729325473fba1be3f14896eace"}
Dec 03 14:08:17.597605 master-0 kubenswrapper[3187]: I1203 14:08:17.597599 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875"}
Dec 03 14:08:17.599627 master-0 kubenswrapper[3187]: I1203 14:08:17.599580 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"b3ca94a978df3ec904f71631e19f006c6a70b016095b3c0dca3c4a7f1e79fe33"}
Dec 03 14:08:17.604282 master-0 kubenswrapper[3187]: I1203 14:08:17.604214 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"1ea58079a17446175428ce19e1a24d93e0d3ae912a147544f1f2147a7d2e42ea"}
Dec 03 14:08:17.604282 master-0 kubenswrapper[3187]: I1203 14:08:17.604281 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d"}
Dec 03 14:08:17.605267 master-0 kubenswrapper[3187]: I1203 14:08:17.605217 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00"}
Dec 03 14:08:17.608157 master-0 kubenswrapper[3187]: I1203 14:08:17.608018 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"299d26524a258ac4b1ac8e668721ff80bef55b23c5476971212ae6fdff6d1ef3"}
Dec 03 14:08:17.608157 master-0 kubenswrapper[3187]: I1203 14:08:17.608081 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"182ad3d83d99bd0f44b84845199638371d2d4ad58dfb1451dee40ac6bfba5ae8"}
Dec 03 14:08:17.614583 master-0 kubenswrapper[3187]: E1203 14:08:17.613902 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.614583 master-0 kubenswrapper[3187]: E1203 14:08:17.613949 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.614583 master-0 kubenswrapper[3187]: E1203 14:08:17.613968 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.614583 master-0 kubenswrapper[3187]: E1203 14:08:17.614058 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.114030885 +0000 UTC m=+26.080566780 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.616112 master-0 kubenswrapper[3187]: I1203 14:08:17.616064 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:17.616630 master-0 kubenswrapper[3187]: I1203 14:08:17.616592 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:17.616708 master-0 kubenswrapper[3187]: I1203 14:08:17.616668 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:17.616927 master-0 kubenswrapper[3187]: E1203 14:08:17.616811 3187 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.616927 master-0 kubenswrapper[3187]: E1203 14:08:17.616862 3187 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.616927 master-0 kubenswrapper[3187]: I1203 14:08:17.616871 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:17.616927 master-0 kubenswrapper[3187]: I1203 14:08:17.616923 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:17.617146 master-0 kubenswrapper[3187]: E1203 14:08:17.616879 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.617355 master-0 kubenswrapper[3187]: E1203 14:08:17.617009 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.617521 master-0 kubenswrapper[3187]: E1203 14:08:17.617036 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.6170111 +0000 UTC m=+26.583546995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.617521 master-0 kubenswrapper[3187]: E1203 14:08:17.616940 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617496 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617507 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617664 3187 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617134 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617719 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617731 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617819 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.617793172 +0000 UTC m=+26.584329077 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617851 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.617841884 +0000 UTC m=+26.584377779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.617994 3187 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.618160 master-0 kubenswrapper[3187]: E1203 14:08:17.618050 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.618037619 +0000 UTC m=+26.584573514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.632579 master-0 kubenswrapper[3187]: E1203 14:08:17.632504 3187 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.632579 master-0 kubenswrapper[3187]: E1203 14:08:17.632552 3187 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.632579 master-0 kubenswrapper[3187]: E1203 14:08:17.632570 3187 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.632951 master-0 kubenswrapper[3187]: E1203 14:08:17.632658 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.132638595 +0000 UTC m=+26.099174490 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.663673 master-0 kubenswrapper[3187]: E1203 14:08:17.663459 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.663673 master-0 kubenswrapper[3187]: E1203 14:08:17.663510 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.663673 master-0 kubenswrapper[3187]: E1203 14:08:17.663527 3187 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.663673 master-0 kubenswrapper[3187]: E1203 14:08:17.663628 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.163604636 +0000 UTC m=+26.130140541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.665133 master-0 kubenswrapper[3187]: E1203 14:08:17.664981 3187 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.665133 master-0 kubenswrapper[3187]: E1203 14:08:17.665020 3187 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.665133 master-0 kubenswrapper[3187]: E1203 14:08:17.665036 3187 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.665133 master-0 kubenswrapper[3187]: E1203 14:08:17.665105 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.165085778 +0000 UTC m=+26.131621713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.705166 master-0 kubenswrapper[3187]: E1203 14:08:17.704924 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.705166 master-0 kubenswrapper[3187]: E1203 14:08:17.704980 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.705166 master-0 kubenswrapper[3187]: E1203 14:08:17.704997 3187 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.705166 master-0 kubenswrapper[3187]: E1203 14:08:17.705089 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.205065216 +0000 UTC m=+26.171601111 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.708246 master-0 kubenswrapper[3187]: E1203 14:08:17.708093 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.708246 master-0 kubenswrapper[3187]: E1203 14:08:17.708138 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.708246 master-0 kubenswrapper[3187]: E1203 14:08:17.708155 3187 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.708525 master-0 kubenswrapper[3187]: E1203 14:08:17.708256 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.208232537 +0000 UTC m=+26.174768612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.723071 master-0 kubenswrapper[3187]: E1203 14:08:17.720442 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.723071 master-0 kubenswrapper[3187]: E1203 14:08:17.720496 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.723071 master-0 kubenswrapper[3187]: E1203 14:08:17.720511 3187 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.723071 master-0 kubenswrapper[3187]: E1203 14:08:17.720610 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.220586868 +0000 UTC m=+26.187122763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.740808 master-0 kubenswrapper[3187]: E1203 14:08:17.740739 3187 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.740808 master-0 kubenswrapper[3187]: E1203 14:08:17.740792 3187 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.740808 master-0 kubenswrapper[3187]: E1203 14:08:17.740814 3187 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.741001 master-0 kubenswrapper[3187]: E1203 14:08:17.740915 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.240885946 +0000 UTC m=+26.207421851 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.781601 master-0 kubenswrapper[3187]: E1203 14:08:17.781458 3187 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:17.781601 master-0 kubenswrapper[3187]: E1203 14:08:17.781574 3187 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.781601 master-0 kubenswrapper[3187]: E1203 14:08:17.781595 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.781874 master-0 kubenswrapper[3187]: E1203 14:08:17.781663 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:17.781874 master-0 kubenswrapper[3187]: E1203 14:08:17.781698 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.781874 master-0 kubenswrapper[3187]: E1203 14:08:17.781720 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.281697177 +0000 UTC m=+26.248233072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.781874 master-0 kubenswrapper[3187]: E1203 14:08:17.781725 3187 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.781874 master-0 kubenswrapper[3187]: E1203 14:08:17.781826 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.28180113 +0000 UTC m=+26.248337025 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.814049 master-0 kubenswrapper[3187]: I1203 14:08:17.813973 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:17.819467 master-0 kubenswrapper[3187]: I1203 14:08:17.819397 3187 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:17.819998 master-0 kubenswrapper[3187]: E1203 14:08:17.819888 3187 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.820108 master-0 kubenswrapper[3187]: E1203 14:08:17.820063 3187 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.820145 master-0 kubenswrapper[3187]: E1203 14:08:17.820111 3187 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.820317 master-0 kubenswrapper[3187]: E1203 14:08:17.820286 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 
podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.320242415 +0000 UTC m=+26.286778310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.820825 master-0 kubenswrapper[3187]: I1203 14:08:17.820779 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:17.822720 master-0 kubenswrapper[3187]: E1203 14:08:17.822688 3187 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.822720 master-0 kubenswrapper[3187]: E1203 14:08:17.822718 3187 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.822816 master-0 kubenswrapper[3187]: E1203 14:08:17.822734 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.822925 master-0 kubenswrapper[3187]: E1203 14:08:17.822880 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f 
nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.322848829 +0000 UTC m=+26.289384724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.826045 master-0 kubenswrapper[3187]: I1203 14:08:17.826004 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:17.826256 master-0 kubenswrapper[3187]: I1203 14:08:17.826158 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:17.826381 master-0 kubenswrapper[3187]: I1203 14:08:17.826350 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:17.826548 master-0 kubenswrapper[3187]: I1203 14:08:17.826463 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:17.826616 master-0 kubenswrapper[3187]: I1203 14:08:17.826567 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:17.826677 master-0 kubenswrapper[3187]: I1203 14:08:17.826640 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:17.826723 master-0 kubenswrapper[3187]: I1203 14:08:17.826689 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:17.826756 master-0 kubenswrapper[3187]: I1203 14:08:17.826738 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: 
\"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.826913 master-0 kubenswrapper[3187]: E1203 14:08:17.826886 3187 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:17.826951 master-0 kubenswrapper[3187]: E1203 14:08:17.826944 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.826929955 +0000 UTC m=+27.793465850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:17.827144 master-0 kubenswrapper[3187]: E1203 14:08:17.827086 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:17.827144 master-0 kubenswrapper[3187]: E1203 14:08:17.827106 3187 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:17.827282 master-0 kubenswrapper[3187]: E1203 14:08:17.827179 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:17.827282 master-0 kubenswrapper[3187]: E1203 14:08:17.827252 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" 
not registered Dec 03 14:08:17.827356 master-0 kubenswrapper[3187]: E1203 14:08:17.827336 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.827163932 +0000 UTC m=+27.793699977 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:17.827356 master-0 kubenswrapper[3187]: E1203 14:08:17.827353 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:17.827473 master-0 kubenswrapper[3187]: E1203 14:08:17.827348 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:17.827525 master-0 kubenswrapper[3187]: E1203 14:08:17.827515 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:17.827609 master-0 kubenswrapper[3187]: E1203 14:08:17.827585 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.827536952 +0000 UTC m=+27.794073017 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:17.827884 master-0 kubenswrapper[3187]: E1203 14:08:17.827836 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.827610774 +0000 UTC m=+27.794146919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:17.827937 master-0 kubenswrapper[3187]: E1203 14:08:17.827926 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.827911183 +0000 UTC m=+27.794447278 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:17.827979 master-0 kubenswrapper[3187]: E1203 14:08:17.827944 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.827937744 +0000 UTC m=+27.794473739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:17.828020 master-0 kubenswrapper[3187]: E1203 14:08:17.828002 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.827988295 +0000 UTC m=+27.794524190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:17.828059 master-0 kubenswrapper[3187]: E1203 14:08:17.828021 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.828011316 +0000 UTC m=+27.794547671 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:17.828106 master-0 kubenswrapper[3187]: I1203 14:08:17.828073 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:17.828746 master-0 kubenswrapper[3187]: E1203 14:08:17.827052 3187 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:17.828811 master-0 kubenswrapper[3187]: E1203 14:08:17.828798 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.828784898 +0000 UTC m=+27.795320793 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:17.828894 master-0 kubenswrapper[3187]: I1203 14:08:17.826765 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:17.829064 master-0 kubenswrapper[3187]: I1203 14:08:17.828946 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.829064 master-0 kubenswrapper[3187]: I1203 14:08:17.829003 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:17.829064 master-0 kubenswrapper[3187]: I1203 14:08:17.829028 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: I1203 14:08:17.829079 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: E1203 14:08:17.829098 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: E1203 14:08:17.829102 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: I1203 14:08:17.829129 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: E1203 14:08:17.829180 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829151918 +0000 UTC m=+27.795687953 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:17.829218 master-0 kubenswrapper[3187]: E1203 14:08:17.829213 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.82919937 +0000 UTC m=+27.795735495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: E1203 14:08:17.829266 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: I1203 14:08:17.829310 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: E1203 14:08:17.829329 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: E1203 14:08:17.829340 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829306033 +0000 UTC m=+27.795841958 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: E1203 14:08:17.829365 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829356944 +0000 UTC m=+27.795892839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: I1203 14:08:17.829404 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: I1203 14:08:17.829455 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: I1203 14:08:17.829486 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:17.829485 master-0 kubenswrapper[3187]: E1203 14:08:17.829409 3187 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829495 3187 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: I1203 14:08:17.829511 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: I1203 14:08:17.829539 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " 
pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829513 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829577 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829555 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829541869 +0000 UTC m=+27.796077804 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829488 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: I1203 14:08:17.829621 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:17.829828 master-0 
kubenswrapper[3187]: E1203 14:08:17.829634 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829659 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829646012 +0000 UTC m=+27.796181907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829671 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829687 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: I1203 14:08:17.829692 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:17.829828 master-0 kubenswrapper[3187]: E1203 14:08:17.829565 3187 secret.go:189] Couldn't get 
secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: E1203 14:08:17.829705 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829685773 +0000 UTC m=+27.796221898 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: E1203 14:08:17.829910 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829892469 +0000 UTC m=+27.796428364 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: E1203 14:08:17.829932 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.82992209 +0000 UTC m=+27.796458225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: E1203 14:08:17.829954 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829942401 +0000 UTC m=+27.796478636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: E1203 14:08:17.829969 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.829962411 +0000 UTC m=+27.796498516 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830026 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830073 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830109 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830143 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod 
\"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830167 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.830214 master-0 kubenswrapper[3187]: I1203 14:08:17.830191 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830240 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830275 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830306 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830334 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830355 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830378 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830402 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830448 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830482 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830503 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830524 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830555 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:17.830574 master-0 kubenswrapper[3187]: I1203 14:08:17.830574 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830597 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830650 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830694 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 
14:08:17.830720 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830738 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830758 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830779 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: I1203 14:08:17.830800 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.830935 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.830963 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.830969 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.8309602 +0000 UTC m=+27.797496095 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831022 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831045 3187 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831038 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831026502 +0000 UTC m=+27.797562407 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831083 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831101 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831086393 +0000 UTC m=+27.797622328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831114 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831130 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831117744 +0000 UTC m=+27.797653669 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831122 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831154 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831156 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831144315 +0000 UTC m=+27.797680240 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831172 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:17.831168 master-0 kubenswrapper[3187]: E1203 14:08:17.831182 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831173686 +0000 UTC m=+27.797709581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831221 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831209877 +0000 UTC m=+27.797745992 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831225 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831239 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831230267 +0000 UTC m=+27.797766382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831247 3187 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831261 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831253418 +0000 UTC m=+27.797789553 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831278 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831271719 +0000 UTC m=+27.797807814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831283 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831221 3187 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831298 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831286319 +0000 UTC m=+27.797822254 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831244 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831324 3187 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.829734 3187 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831372 3187 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831326 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.83131302 +0000 UTC m=+27.797848945 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831438 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831411983 +0000 UTC m=+27.797948088 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831448 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831456 3187 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831467 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831046 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831502 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831462 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831448624 +0000 UTC m=+27.797984709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831531 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831522586 +0000 UTC m=+27.798058481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831547 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831539936 +0000 UTC m=+27.798075831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831559 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831554427 +0000 UTC m=+27.798090322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831572 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831565777 +0000 UTC m=+27.798101672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831574 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831583 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831578727 +0000 UTC m=+27.798114622 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831596 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831624 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831652 3187 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831600 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831592388 +0000 UTC m=+27.798128283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831377 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831747 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831725042 +0000 UTC m=+27.798261117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831379 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831788 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:17.832081 master-0 kubenswrapper[3187]: E1203 14:08:17.831290 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.831795 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831770973 +0000 UTC m=+27.798307108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.831890 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831828805 +0000 UTC m=+27.798364950 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.831924 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.831913817 +0000 UTC m=+26.798449752 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.831954 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831943278 +0000 UTC m=+27.798479213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.831981 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831968929 +0000 UTC m=+27.798504864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.832006 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.831994389 +0000 UTC m=+27.798530324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:17.833055 master-0 kubenswrapper[3187]: E1203 14:08:17.832032 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.83201856 +0000 UTC m=+27.798554495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:17.833303 master-0 kubenswrapper[3187]: I1203 14:08:17.833273 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:17.833344 master-0 kubenswrapper[3187]: I1203 14:08:17.833328 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:17.833377 master-0 kubenswrapper[3187]: I1203 14:08:17.833358 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:17.833731 master-0 kubenswrapper[3187]: I1203 14:08:17.833698 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:17.833779 master-0 kubenswrapper[3187]: I1203 14:08:17.833761 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:17.833813 master-0 kubenswrapper[3187]: I1203 14:08:17.833800 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:17.834044 master-0 kubenswrapper[3187]: I1203 14:08:17.834019 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:17.834206 master-0 kubenswrapper[3187]: I1203 14:08:17.834062 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:17.834206 master-0 kubenswrapper[3187]: I1203 14:08:17.834089 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:17.834206 master-0 kubenswrapper[3187]: I1203 14:08:17.834123 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.834604 master-0 kubenswrapper[3187]: I1203 14:08:17.834288 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:17.834604 master-0 kubenswrapper[3187]: I1203 14:08:17.834360 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:17.834604 master-0 kubenswrapper[3187]: I1203 14:08:17.834394 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:17.834604 master-0 kubenswrapper[3187]: I1203 14:08:17.834449 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:17.834604 master-0 kubenswrapper[3187]: I1203 14:08:17.834606 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:17.834746 master-0 kubenswrapper[3187]: I1203 14:08:17.834637 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:17.834746 master-0 kubenswrapper[3187]: I1203 14:08:17.834676 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.834923 master-0 kubenswrapper[3187]: I1203 14:08:17.834889 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:17.834957 master-0 kubenswrapper[3187]: I1203 14:08:17.834937 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:17.835173 master-0 kubenswrapper[3187]: I1203 14:08:17.835145 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:17.835217 master-0 kubenswrapper[3187]: I1203 14:08:17.835197 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:17.835258 master-0 kubenswrapper[3187]: I1203 14:08:17.835238 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:17.835302 master-0 kubenswrapper[3187]: I1203 14:08:17.835284 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.835511 master-0 kubenswrapper[3187]: I1203 14:08:17.835491 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:17.835571 master-0 kubenswrapper[3187]: I1203 14:08:17.835553 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:17.835627 master-0 kubenswrapper[3187]: I1203 14:08:17.835612 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:17.835821 master-0 kubenswrapper[3187]: I1203 14:08:17.835797 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:17.835872 master-0 kubenswrapper[3187]: I1203 14:08:17.835834 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:17.835908 master-0 kubenswrapper[3187]: I1203 14:08:17.835898 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:17.836081 master-0 kubenswrapper[3187]: I1203 14:08:17.835935 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:17.836130 master-0 kubenswrapper[3187]: I1203 14:08:17.836112 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:17.836174 master-0 kubenswrapper[3187]: I1203 14:08:17.836157 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:17.836234 master-0 kubenswrapper[3187]: I1203 14:08:17.836218 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:17.836406 master-0 kubenswrapper[3187]: I1203 14:08:17.836386 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:17.836485 master-0 kubenswrapper[3187]: I1203 14:08:17.836466 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:17.836522 master-0 kubenswrapper[3187]: I1203 14:08:17.836510 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:17.836677 master-0 kubenswrapper[3187]: I1203 14:08:17.836658 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:17.836766 master-0 kubenswrapper[3187]: I1203 14:08:17.836748 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:17.836801 master-0 kubenswrapper[3187]: I1203 14:08:17.836790 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.836859 master-0 kubenswrapper[3187]: I1203 14:08:17.836843 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:17.837032 master-0 kubenswrapper[3187]: I1203 14:08:17.837013 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:17.837080 master-0 kubenswrapper[3187]: I1203 14:08:17.837062 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:17.837116 master-0 kubenswrapper[3187]: I1203 14:08:17.837095 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:17.837259 master-0 kubenswrapper[3187]: I1203 14:08:17.837131 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:17.837313 master-0 kubenswrapper[3187]: I1203 14:08:17.837289 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:17.837352 master-0 kubenswrapper[3187]: I1203 14:08:17.837324 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:17.837381 master-0 kubenswrapper[3187]: I1203 14:08:17.837357 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:17.837679 master-0 kubenswrapper[3187]: I1203 14:08:17.837649 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:17.837720 master-0 kubenswrapper[3187]: I1203 14:08:17.837701 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:17.838023 master-0 kubenswrapper[3187]: I1203 14:08:17.837942 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:17.838023 master-0 kubenswrapper[3187]: I1203 14:08:17.838026 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:17.838315 master-0 kubenswrapper[3187]: I1203 14:08:17.838074 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:17.838315 master-0 kubenswrapper[3187]: I1203 14:08:17.838198 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:17.838315 master-0 kubenswrapper[3187]: I1203 14:08:17.838235 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID:
\"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:17.838315 master-0 kubenswrapper[3187]: I1203 14:08:17.838275 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:17.838755 master-0 kubenswrapper[3187]: I1203 14:08:17.838332 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:17.838755 master-0 kubenswrapper[3187]: I1203 14:08:17.838507 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:17.838755 master-0 kubenswrapper[3187]: I1203 14:08:17.838537 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.838755 master-0 kubenswrapper[3187]: I1203 14:08:17.838561 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:17.838755 master-0 kubenswrapper[3187]: I1203 14:08:17.838591 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.838933 master-0 kubenswrapper[3187]: I1203 14:08:17.838767 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.838933 master-0 kubenswrapper[3187]: I1203 14:08:17.838806 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:17.839017 master-0 kubenswrapper[3187]: I1203 14:08:17.838848 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.839017 master-0 kubenswrapper[3187]: I1203 14:08:17.838982 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:17.839017 master-0 kubenswrapper[3187]: I1203 14:08:17.839013 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.839120 master-0 kubenswrapper[3187]: I1203 14:08:17.839038 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:17.839165 master-0 kubenswrapper[3187]: I1203 14:08:17.839058 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:17.839210 master-0 kubenswrapper[3187]: I1203 14:08:17.839183 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:17.839247 master-0 kubenswrapper[3187]: I1203 14:08:17.839217 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:17.839283 master-0 kubenswrapper[3187]: I1203 14:08:17.839252 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:17.839493 master-0 kubenswrapper[3187]: I1203 14:08:17.839466 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.839556 master-0 kubenswrapper[3187]: I1203 14:08:17.839511 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:17.839595 master-0 kubenswrapper[3187]: I1203 14:08:17.839569 3187 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:17.839738 master-0 kubenswrapper[3187]: I1203 14:08:17.839715 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:17.839795 master-0 kubenswrapper[3187]: I1203 14:08:17.839748 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:17.839795 master-0 kubenswrapper[3187]: I1203 14:08:17.839779 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:17.839912 master-0 kubenswrapper[3187]: I1203 14:08:17.839810 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:17.840003 
master-0 kubenswrapper[3187]: I1203 14:08:17.839982 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:17.840049 master-0 kubenswrapper[3187]: I1203 14:08:17.840028 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.840089 master-0 kubenswrapper[3187]: I1203 14:08:17.840068 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:17.840322 master-0 kubenswrapper[3187]: I1203 14:08:17.840243 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:17.840322 master-0 kubenswrapper[3187]: I1203 14:08:17.840281 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:17.840322 master-0 kubenswrapper[3187]: I1203 14:08:17.840318 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.840463 master-0 kubenswrapper[3187]: I1203 14:08:17.840352 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.840663 master-0 kubenswrapper[3187]: I1203 14:08:17.840627 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.840811 master-0 kubenswrapper[3187]: I1203 14:08:17.840679 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.840811 master-0 kubenswrapper[3187]: I1203 14:08:17.840740 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: 
\"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:17.840956 master-0 kubenswrapper[3187]: I1203 14:08:17.840934 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:17.841012 master-0 kubenswrapper[3187]: I1203 14:08:17.840979 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:17.841058 master-0 kubenswrapper[3187]: I1203 14:08:17.841038 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:17.841286 master-0 kubenswrapper[3187]: I1203 14:08:17.841094 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.841351 master-0 kubenswrapper[3187]: I1203 14:08:17.841307 3187 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:17.841351 master-0 kubenswrapper[3187]: I1203 14:08:17.841345 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:17.841453 master-0 kubenswrapper[3187]: I1203 14:08:17.841385 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:17.841629 master-0 kubenswrapper[3187]: I1203 14:08:17.841608 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.841679 master-0 kubenswrapper[3187]: I1203 14:08:17.841659 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:17.841721 master-0 kubenswrapper[3187]: I1203 14:08:17.841698 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:17.841763 master-0 kubenswrapper[3187]: I1203 14:08:17.841737 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:17.841892 master-0 kubenswrapper[3187]: I1203 14:08:17.841874 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:17.841944 master-0 kubenswrapper[3187]: I1203 14:08:17.841916 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:17.841984 master-0 kubenswrapper[3187]: I1203 14:08:17.841947 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:17.842021 master-0 kubenswrapper[3187]: I1203 14:08:17.841984 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:17.842127 master-0 kubenswrapper[3187]: I1203 14:08:17.842108 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.842190 master-0 kubenswrapper[3187]: I1203 14:08:17.842173 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:17.842237 master-0 kubenswrapper[3187]: I1203 14:08:17.842216 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: 
\"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:17.842392 master-0 kubenswrapper[3187]: I1203 14:08:17.842372 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:17.842470 master-0 kubenswrapper[3187]: I1203 14:08:17.842439 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:17.842516 master-0 kubenswrapper[3187]: I1203 14:08:17.842480 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:17.855770 master-0 kubenswrapper[3187]: E1203 14:08:17.855060 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.855947 master-0 kubenswrapper[3187]: E1203 14:08:17.855786 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:17.855947 master-0 kubenswrapper[3187]: E1203 14:08:17.855805 3187 projected.go:194] Error preparing data for projected 
volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.855947 master-0 kubenswrapper[3187]: E1203 14:08:17.855901 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.855875149 +0000 UTC m=+27.822411044 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.855967 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.85592917 +0000 UTC m=+26.822465065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.855257 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.856014 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.855795 3187 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: I1203 14:08:17.855221 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.855334 3187 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.856085 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.856104 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.856112 master-0 kubenswrapper[3187]: E1203 14:08:17.855315 3187 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855372 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855451 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855379 3187 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855490 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855546 3187 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855512 3187 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855559 3187 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855618 3187 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855580 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855674 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855716 3187 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.855982 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.856330 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.856034 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856014303 +0000 UTC m=+27.822550198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.856383 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856363513 +0000 UTC m=+27.822899408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.856404 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:17.856451 master-0 kubenswrapper[3187]: E1203 14:08:17.856407 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856396384 +0000 UTC m=+27.822932279 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856518 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856533 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856561 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856550678 +0000 UTC m=+27.823086573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856606 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856649 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856689 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856699 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856733 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856714483 +0000 UTC m=+27.823250378 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856761 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856767 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856793 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856779505 +0000 UTC m=+27.823315400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856813 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856800805 +0000 UTC m=+27.823336700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856839 3187 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: I1203 14:08:17.856845 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856864 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856857877 +0000 UTC m=+27.823393772 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856924 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856948 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856942249 +0000 UTC m=+27.823478144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.856989 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.857006 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.856988281 +0000 UTC m=+27.823524186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.857038 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.857026822 +0000 UTC m=+27.823562717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.857055 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:17.857063 master-0 kubenswrapper[3187]: E1203 14:08:17.857078 3187 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857095 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857115 3187 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857128 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857142 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857159 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857065 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.857049542 +0000 UTC m=+27.823585437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857194 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857207 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.857188556 +0000 UTC m=+27.823724461 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857229 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857245 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.857224417 +0000 UTC m=+27.823760322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857279 3187 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857347 3187 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857396 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857483 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857516 3187 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857610 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857633 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857348 3187 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857171 3187 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857714 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857732 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857838 3187 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857858 3187 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857916 3187 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.857907 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858001 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858039 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858087 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858220 3187 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858283 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858289 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858388 3187 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858440 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858223 3187 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858516 3187 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858538 3187 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858604 3187 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858632 3187 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858724 3187 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858724 3187 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858777 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858944 3187 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.858994 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859007 3187 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859046 3187 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859114 3187 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859132 3187 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859246 3187 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859265 3187 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859266 3187 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859355 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859412 3187 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859470 3187 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859565 3187 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859572 3187 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859632 3187 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859713 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859751 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859819 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.85978992 +0000 UTC m=+27.826325825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:17.860469 master-0 kubenswrapper[3187]: E1203 14:08:17.859819 3187 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.859928 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.859975 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.859953515 +0000 UTC m=+27.826489410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.855755 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860027 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.860001866 +0000 UTC m=+27.826537761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860075 3187 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860101 3187 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860080 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.860067068 +0000 UTC m=+27.826602963 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.859978 3187 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860142 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860154 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860191 3187 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860054 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860197 3187 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860285 3187 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not 
registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860306 3187 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860320 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860354 3187 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860166 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86013224 +0000 UTC m=+27.826668135 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860385 3187 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860393 3187 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860460 3187 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860490 3187 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860501 3187 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860527 3187 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860566 3187 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object 
"openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860570 3187 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860664 3187 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860678 3187 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.859578 3187 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860148 3187 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860463 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.860402228 +0000 UTC m=+27.826938123 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860749 3187 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860253 3187 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860775 3187 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860792 3187 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860812 3187 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860464 3187 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860855 3187 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 
14:08:17.860660 3187 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860777 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.860755208 +0000 UTC m=+27.827291103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.857404 3187 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860255 3187 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860974 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.860919643 +0000 UTC m=+27.827455528 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.860977 3187 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.861045 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861034206 +0000 UTC m=+27.827570101 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.861125 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861083427 +0000 UTC m=+27.827619332 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:17.862968 master-0 kubenswrapper[3187]: E1203 14:08:17.861156 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861140089 +0000 UTC m=+27.827675984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861185 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86116607 +0000 UTC m=+27.827701965 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861250 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861235391 +0000 UTC m=+27.827771286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861275 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861264322 +0000 UTC m=+27.827800217 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861304 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.361286973 +0000 UTC m=+26.327822868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861333 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861321394 +0000 UTC m=+27.827857299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.861360 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.861345535 +0000 UTC m=+27.827881430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862576 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862558219 +0000 UTC m=+27.829094114 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862596 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.8625875 +0000 UTC m=+27.829123395 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862614 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862606761 +0000 UTC m=+27.829142656 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862629 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862621481 +0000 UTC m=+27.829157376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862643 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862635611 +0000 UTC m=+27.829171496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862657 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.862649032 +0000 UTC m=+27.829184927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862669 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862663312 +0000 UTC m=+27.829199207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862682 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862674902 +0000 UTC m=+27.829210797 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862695 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862688863 +0000 UTC m=+27.829224758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862710 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862702873 +0000 UTC m=+27.829238768 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862723 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862716254 +0000 UTC m=+27.829252149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862736 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862730894 +0000 UTC m=+27.829266789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862749 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.862742664 +0000 UTC m=+27.829278559 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862763 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862755215 +0000 UTC m=+27.829291110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:17.864877 master-0 kubenswrapper[3187]: E1203 14:08:17.862775 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862769065 +0000 UTC m=+27.829304960 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862798 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862781115 +0000 UTC m=+27.829317010 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862812 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862804856 +0000 UTC m=+27.829340751 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862823 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862818047 +0000 UTC m=+27.829353942 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862836 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862829467 +0000 UTC m=+27.829365362 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862849 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862842187 +0000 UTC m=+27.829378082 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862918 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.862875118 +0000 UTC m=+27.829411013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862931 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86292509 +0000 UTC m=+27.829460985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862943 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86293642 +0000 UTC m=+27.829472315 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.862955 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86294909 +0000 UTC m=+27.829484985 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863068 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863059323 +0000 UTC m=+27.829595208 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863082 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863075024 +0000 UTC m=+27.829610919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863094 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863087494 +0000 UTC m=+27.829623389 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863104 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863099475 +0000 UTC m=+27.829635370 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863118 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863110825 +0000 UTC m=+27.829646720 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863130 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863124245 +0000 UTC m=+27.829660140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863140 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863135016 +0000 UTC m=+27.829670901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863154 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863145766 +0000 UTC m=+27.829681651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863166 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863159736 +0000 UTC m=+27.829695631 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863179 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.863172037 +0000 UTC m=+27.829707932 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863189 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863184367 +0000 UTC m=+27.829720262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:17.865681 master-0 kubenswrapper[3187]: E1203 14:08:17.863722 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863690931 +0000 UTC m=+27.830226826 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863819 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863807585 +0000 UTC m=+27.830343480 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863860 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863825775 +0000 UTC m=+27.830361670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863884 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863867966 +0000 UTC m=+27.830404091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863906 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863896437 +0000 UTC m=+27.830432522 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863948 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863934888 +0000 UTC m=+27.830470993 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863968 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.863957689 +0000 UTC m=+27.830493804 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.863984 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.863974659 +0000 UTC m=+27.830510774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.864030 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.864019981 +0000 UTC m=+27.830555866 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.864044 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.864036581 +0000 UTC m=+27.830572476 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.864067 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.864059002 +0000 UTC m=+27.830594887 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.864081 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.864073772 +0000 UTC m=+27.830609667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.864097 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.864091133 +0000 UTC m=+27.830627028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865052 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.864105253 +0000 UTC m=+27.830641148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865083 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865075191 +0000 UTC m=+27.831611076 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865099 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865090571 +0000 UTC m=+27.831626466 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865164 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865133862 +0000 UTC m=+27.831669757 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865180 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865172264 +0000 UTC m=+27.831708159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865196 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865188854 +0000 UTC m=+27.831724749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865232 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.865222785 +0000 UTC m=+27.831758680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865246 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865238585 +0000 UTC m=+27.831774470 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865263 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865254336 +0000 UTC m=+27.831790231 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:17.866761 master-0 kubenswrapper[3187]: E1203 14:08:17.865273 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865268076 +0000 UTC m=+27.831803971 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865286 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865279527 +0000 UTC m=+27.831815422 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865299 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865292297 +0000 UTC m=+27.831828192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865313 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865305937 +0000 UTC m=+27.831841822 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865327 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865320488 +0000 UTC m=+27.831856383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865341 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865333218 +0000 UTC m=+27.831869113 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865355 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865346829 +0000 UTC m=+27.831882724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865368 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865360839 +0000 UTC m=+27.831896724 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865378 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865373439 +0000 UTC m=+27.831909334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865391 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86538425 +0000 UTC m=+27.831920145 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865404 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86539762 +0000 UTC m=+27.831933515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865449 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86541062 +0000 UTC m=+27.831946515 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865464 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865457102 +0000 UTC m=+27.831992997 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865483 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.865474672 +0000 UTC m=+27.832010567 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.865496 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.865489463 +0000 UTC m=+27.832025358 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866082 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866069429 +0000 UTC m=+27.832605324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866154 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86609124 +0000 UTC m=+27.832627135 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866178 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866171042 +0000 UTC m=+27.832706937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866190 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866184732 +0000 UTC m=+27.832720627 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866307 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.866298926 +0000 UTC m=+27.832834821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866323 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866316226 +0000 UTC m=+27.832852121 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:17.868110 master-0 kubenswrapper[3187]: E1203 14:08:17.866468 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.86643655 +0000 UTC m=+27.832972605 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:17.869169 master-0 kubenswrapper[3187]: E1203 14:08:17.866501 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866491401 +0000 UTC m=+27.833027296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:17.869169 master-0 kubenswrapper[3187]: E1203 14:08:17.866520 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866511952 +0000 UTC m=+27.833047847 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:17.869169 master-0 kubenswrapper[3187]: E1203 14:08:17.866534 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866527042 +0000 UTC m=+27.833062937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:17.869169 master-0 kubenswrapper[3187]: E1203 14:08:17.866548 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866540993 +0000 UTC m=+27.833076888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:17.869169 master-0 kubenswrapper[3187]: E1203 14:08:17.866562 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.866553863 +0000 UTC m=+27.833089748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:17.875653 master-0 kubenswrapper[3187]: E1203 14:08:17.875556 3187 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.875653 master-0 kubenswrapper[3187]: E1203 14:08:17.875597 3187 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.875653 master-0 kubenswrapper[3187]: E1203 14:08:17.875614 3187 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 
03 14:08:17.875789 master-0 kubenswrapper[3187]: E1203 14:08:17.875675 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.375664792 +0000 UTC m=+26.342200677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.882054 master-0 kubenswrapper[3187]: I1203 14:08:17.882001 3187 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:17.901526 master-0 kubenswrapper[3187]: E1203 14:08:17.901435 3187 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:17.901526 master-0 kubenswrapper[3187]: E1203 14:08:17.901495 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:17.901912 master-0 kubenswrapper[3187]: E1203 14:08:17.901588 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" 
failed. No retries permitted until 2025-12-03 14:08:18.401562699 +0000 UTC m=+26.368098594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:17.914938 master-0 kubenswrapper[3187]: I1203 14:08:17.914856 3187 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:08:17.920442 master-0 kubenswrapper[3187]: I1203 14:08:17.920370 3187 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:17.921726 master-0 kubenswrapper[3187]: I1203 14:08:17.921626 3187 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:08:17.953214 master-0 kubenswrapper[3187]: I1203 14:08:17.953132 3187 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:08:17.958459 master-0 kubenswrapper[3187]: I1203 14:08:17.958364 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:17.958668 master-0 kubenswrapper[3187]: I1203 14:08:17.958564 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:17.958668 master-0 kubenswrapper[3187]: E1203 14:08:17.958638 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:17.958762 master-0 kubenswrapper[3187]: E1203 14:08:17.958673 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.958762 master-0 kubenswrapper[3187]: E1203 14:08:17.958688 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.958762 master-0 kubenswrapper[3187]: E1203 14:08:17.958751 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.958731996 +0000 UTC m=+27.925268091 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.958889 master-0 kubenswrapper[3187]: E1203 14:08:17.958852 3187 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:17.958889 master-0 kubenswrapper[3187]: E1203 14:08:17.958876 3187 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.958943 master-0 kubenswrapper[3187]: E1203 14:08:17.958890 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.959017 master-0 kubenswrapper[3187]: E1203 14:08:17.958982 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.958959943 +0000 UTC m=+27.925496048 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.959952 master-0 kubenswrapper[3187]: I1203 14:08:17.959893 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:17.959952 master-0 kubenswrapper[3187]: I1203 14:08:17.959950 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: E1203 14:08:17.959997 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: E1203 14:08:17.960018 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: E1203 14:08:17.960030 3187 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: I1203 14:08:17.960061 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: E1203 14:08:17.960070 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.960059924 +0000 UTC m=+26.926595969 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: I1203 14:08:17.960157 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:17.960249 master-0 kubenswrapper[3187]: E1203 14:08:17.960220 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960263 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960277 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960294 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object 
"openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960335 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960354 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.960334892 +0000 UTC m=+26.926870787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960353 3187 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960461 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.960438625 +0000 UTC m=+26.926974520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960502 3187 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.960547 master-0 kubenswrapper[3187]: E1203 14:08:17.960563 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:17.960925 master-0 kubenswrapper[3187]: E1203 14:08:17.960640 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:18.96061967 +0000 UTC m=+26.927155575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.038698 master-0 kubenswrapper[3187]: W1203 14:08:18.038572 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c95e54_b4ba_4b19_a97c_abcec840ac5d.slice/crio-8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90 WatchSource:0}: Error finding container 8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90: Status 404 returned error can't find the container with id 8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90 Dec 03 14:08:18.039132 master-0 kubenswrapper[3187]: W1203 14:08:18.039069 3187 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec89938d_35a5_46ba_8c63_12489db18cbd.slice/crio-0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9 WatchSource:0}: Error finding container 0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9: Status 404 returned error can't find the container with id 0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9 Dec 03 14:08:18.062519 master-0 kubenswrapper[3187]: I1203 14:08:18.062464 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:18.062676 master-0 kubenswrapper[3187]: I1203 14:08:18.062609 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:18.062898 master-0 kubenswrapper[3187]: E1203 14:08:18.062848 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:18.062898 master-0 kubenswrapper[3187]: E1203 14:08:18.062893 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.062985 master-0 kubenswrapper[3187]: E1203 14:08:18.062905 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.063063 master-0 kubenswrapper[3187]: E1203 14:08:18.063046 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.063028585 +0000 UTC m=+27.029564480 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.063222 master-0 kubenswrapper[3187]: E1203 14:08:18.063169 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:18.063222 master-0 kubenswrapper[3187]: E1203 14:08:18.063217 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.063310 master-0 kubenswrapper[3187]: E1203 14:08:18.063232 3187 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.063377 master-0 kubenswrapper[3187]: E1203 14:08:18.063331 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.063305823 +0000 UTC m=+27.029841718 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.166330 master-0 kubenswrapper[3187]: I1203 14:08:18.166192 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:18.167688 master-0 kubenswrapper[3187]: I1203 14:08:18.167662 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:18.167779 master-0 kubenswrapper[3187]: I1203 14:08:18.167702 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:18.167779 master-0 kubenswrapper[3187]: I1203 14:08:18.167729 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:18.167867 master-0 kubenswrapper[3187]: E1203 14:08:18.167814 3187 
projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:18.167867 master-0 kubenswrapper[3187]: E1203 14:08:18.167837 3187 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.167867 master-0 kubenswrapper[3187]: E1203 14:08:18.167852 3187 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.167974 master-0 kubenswrapper[3187]: E1203 14:08:18.167910 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.16789213 +0000 UTC m=+27.134428025 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168031 master-0 kubenswrapper[3187]: I1203 14:08:18.168008 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:18.168146 master-0 kubenswrapper[3187]: E1203 14:08:18.168123 3187 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:18.168195 master-0 kubenswrapper[3187]: E1203 14:08:18.168147 3187 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.168195 master-0 kubenswrapper[3187]: E1203 14:08:18.168158 3187 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168197 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:19.168183888 +0000 UTC m=+27.134719783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168200 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168219 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168233 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168247 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168262 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.168279 master-0 kubenswrapper[3187]: E1203 14:08:18.168272 3187 projected.go:194] Error preparing 
data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168535 master-0 kubenswrapper[3187]: E1203 14:08:18.168339 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.168326542 +0000 UTC m=+27.134862437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168594 master-0 kubenswrapper[3187]: E1203 14:08:18.168544 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.168530938 +0000 UTC m=+27.135066833 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.168904 master-0 kubenswrapper[3187]: I1203 14:08:18.168872 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:18.169062 master-0 kubenswrapper[3187]: I1203 14:08:18.169040 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:18.169094 master-0 kubenswrapper[3187]: I1203 14:08:18.169077 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:18.169131 master-0 kubenswrapper[3187]: I1203 14:08:18.169123 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod 
\"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:18.169161 master-0 kubenswrapper[3187]: E1203 14:08:18.169131 3187 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169191 master-0 kubenswrapper[3187]: E1203 14:08:18.169167 3187 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.169191 master-0 kubenswrapper[3187]: E1203 14:08:18.169167 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169191 master-0 kubenswrapper[3187]: E1203 14:08:18.169182 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169191 master-0 kubenswrapper[3187]: E1203 14:08:18.169188 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169198 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169249 3187 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.169230688 +0000 UTC m=+28.135766583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169266 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.169261079 +0000 UTC m=+28.135796974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169274 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169286 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169295 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169294 master-0 kubenswrapper[3187]: E1203 14:08:18.169294 3187 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169312 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169323 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 
podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.16931075 +0000 UTC m=+28.135846645 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169344 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.169335051 +0000 UTC m=+28.135871156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: I1203 14:08:18.169308 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169355 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 
14:08:18.169383 3187 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169391 3187 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: I1203 14:08:18.169455 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169490 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.169482695 +0000 UTC m=+28.136018590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169537 3187 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169552 3187 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.169546 master-0 kubenswrapper[3187]: E1203 14:08:18.169561 3187 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.169899 master-0 kubenswrapper[3187]: E1203 14:08:18.169627 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.169617239 +0000 UTC m=+28.136153364 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.273600 master-0 kubenswrapper[3187]: I1203 14:08:18.273554 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:18.273702 master-0 kubenswrapper[3187]: I1203 14:08:18.273685 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:18.273746 master-0 kubenswrapper[3187]: I1203 14:08:18.273730 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:18.273789 master-0 kubenswrapper[3187]: I1203 14:08:18.273775 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" 
(UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:18.274431 master-0 kubenswrapper[3187]: I1203 14:08:18.274389 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:18.274756 master-0 kubenswrapper[3187]: I1203 14:08:18.274731 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:18.274802 master-0 kubenswrapper[3187]: I1203 14:08:18.274776 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:18.274885 master-0 kubenswrapper[3187]: I1203 14:08:18.274862 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: 
\"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:18.275562 master-0 kubenswrapper[3187]: E1203 14:08:18.275540 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:18.275604 master-0 kubenswrapper[3187]: E1203 14:08:18.275564 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.275604 master-0 kubenswrapper[3187]: E1203 14:08:18.275575 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275672 master-0 kubenswrapper[3187]: E1203 14:08:18.275618 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.275602385 +0000 UTC m=+28.242138270 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275709 master-0 kubenswrapper[3187]: E1203 14:08:18.275677 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:08:18.275709 master-0 kubenswrapper[3187]: E1203 14:08:18.275687 3187 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.275709 master-0 kubenswrapper[3187]: E1203 14:08:18.275694 3187 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275822 master-0 kubenswrapper[3187]: E1203 14:08:18.275747 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.275709428 +0000 UTC m=+27.242245323 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275822 master-0 kubenswrapper[3187]: E1203 14:08:18.275791 3187 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.275822 master-0 kubenswrapper[3187]: E1203 14:08:18.275799 3187 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.275822 master-0 kubenswrapper[3187]: E1203 14:08:18.275806 3187 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275825 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.275820052 +0000 UTC m=+27.242355947 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275862 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275869 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275875 3187 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275893 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.275887993 +0000 UTC m=+27.242423888 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275929 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275937 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275943 3187 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.275969 master-0 kubenswrapper[3187]: E1203 14:08:18.275961 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.275955915 +0000 UTC m=+27.242491810 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276001 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276009 3187 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276015 3187 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276032 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.276027187 +0000 UTC m=+28.242563072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276070 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276078 3187 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276084 3187 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276102 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.276097359 +0000 UTC m=+28.242633254 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276136 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276145 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276150 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.276271 master-0 kubenswrapper[3187]: E1203 14:08:18.276168 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.276162031 +0000 UTC m=+28.242697926 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.316755 master-0 kubenswrapper[3187]: I1203 14:08:18.316116 3187 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:18.377497 master-0 kubenswrapper[3187]: I1203 14:08:18.377400 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377578 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377597 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377610 3187 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: I1203 14:08:18.377620 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: I1203 14:08:18.377662 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377691 3187 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377701 3187 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377708 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: I1203 14:08:18.377701 3187 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377742 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.377728222 +0000 UTC m=+27.344264117 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.377806 master-0 kubenswrapper[3187]: E1203 14:08:18.377777 3187 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377847 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377858 3187 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378130 master-0 
kubenswrapper[3187]: E1203 14:08:18.377865 3187 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: I1203 14:08:18.377807 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377850 3187 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377894 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377912 3187 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377933 3187 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object 
"openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377942 3187 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377828 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.377815715 +0000 UTC m=+27.344351610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: E1203 14:08:18.377982 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.377955919 +0000 UTC m=+27.344491814 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378130 master-0 kubenswrapper[3187]: I1203 14:08:18.378047 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378165 3187 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378182 3187 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378190 3187 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378217 3187 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.378206506 +0000 UTC m=+27.344742401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378235 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.378228956 +0000 UTC m=+27.344764851 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378498 master-0 kubenswrapper[3187]: E1203 14:08:18.378356 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.37834845 +0000 UTC m=+27.344884345 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378748 master-0 kubenswrapper[3187]: I1203 14:08:18.378692 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:18.378748 master-0 kubenswrapper[3187]: I1203 14:08:18.378723 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:18.378860 master-0 kubenswrapper[3187]: E1203 14:08:18.378832 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378860 master-0 kubenswrapper[3187]: E1203 14:08:18.378853 3187 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378860 master-0 kubenswrapper[3187]: I1203 14:08:18.378856 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: 
\"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:18.378860 master-0 kubenswrapper[3187]: E1203 14:08:18.378862 3187 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378970 master-0 kubenswrapper[3187]: I1203 14:08:18.378893 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:18.378970 master-0 kubenswrapper[3187]: E1203 14:08:18.378920 3187 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:18.378970 master-0 kubenswrapper[3187]: E1203 14:08:18.378932 3187 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.378970 master-0 kubenswrapper[3187]: E1203 14:08:18.378941 3187 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.378970 master-0 kubenswrapper[3187]: E1203 14:08:18.378969 3187 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.378959457 +0000 UTC m=+28.345495352 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.379104 master-0 kubenswrapper[3187]: I1203 14:08:18.378990 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:18.379104 master-0 kubenswrapper[3187]: E1203 14:08:18.379002 3187 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.379104 master-0 kubenswrapper[3187]: E1203 14:08:18.379012 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.379195 master-0 kubenswrapper[3187]: E1203 14:08:18.379117 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:18.379195 master-0 kubenswrapper[3187]: E1203 14:08:18.379129 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.379195 master-0 kubenswrapper[3187]: E1203 14:08:18.379136 3187 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.379195 master-0 kubenswrapper[3187]: E1203 14:08:18.379189 3187 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:18.379195 master-0 kubenswrapper[3187]: E1203 14:08:18.379196 3187 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.379328 master-0 kubenswrapper[3187]: E1203 14:08:18.379205 3187 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.379328 master-0 kubenswrapper[3187]: E1203 14:08:18.379227 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.379218614 +0000 UTC m=+28.345754509 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.379328 master-0 kubenswrapper[3187]: E1203 14:08:18.379240 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.379234365 +0000 UTC m=+28.345770260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.379328 master-0 kubenswrapper[3187]: E1203 14:08:18.379249 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.379244505 +0000 UTC m=+28.345780400 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.379328 master-0 kubenswrapper[3187]: E1203 14:08:18.379326 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.379321257 +0000 UTC m=+28.345857152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.427082 master-0 kubenswrapper[3187]: I1203 14:08:18.426923 3187 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:18.427082 master-0 kubenswrapper[3187]: [-]has-synced failed: reason withheld Dec 03 14:08:18.427082 master-0 kubenswrapper[3187]: [+]process-running ok Dec 03 14:08:18.427082 master-0 kubenswrapper[3187]: healthz check failed Dec 03 14:08:18.427082 master-0 kubenswrapper[3187]: I1203 14:08:18.427028 3187 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441293 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.441501 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441607 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.441684 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441747 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441772 3187 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.441803 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.441857 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441875 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441890 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.441959 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441971 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.441994 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.442012 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: I1203 14:08:18.442032 3187 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.442085 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.442215 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.442315 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.442377 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:18.442576 master-0 kubenswrapper[3187]: E1203 14:08:18.442449 3187 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:18.482628 master-0 kubenswrapper[3187]: I1203 14:08:18.482556 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:18.482628 master-0 kubenswrapper[3187]: I1203 14:08:18.482604 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:18.482628 master-0 kubenswrapper[3187]: I1203 14:08:18.482644 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:18.482902 master-0 kubenswrapper[3187]: I1203 14:08:18.482698 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:18.482902 master-0 kubenswrapper[3187]: I1203 14:08:18.482748 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: 
\"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:18.482902 master-0 kubenswrapper[3187]: I1203 14:08:18.482767 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:18.483164 master-0 kubenswrapper[3187]: E1203 14:08:18.483135 3187 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483164 master-0 kubenswrapper[3187]: E1203 14:08:18.483162 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483227 master-0 kubenswrapper[3187]: E1203 14:08:18.483206 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:19.483191314 +0000 UTC m=+27.449727209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483270 master-0 kubenswrapper[3187]: E1203 14:08:18.483256 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483270 master-0 kubenswrapper[3187]: E1203 14:08:18.483266 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.483328 master-0 kubenswrapper[3187]: E1203 14:08:18.483275 3187 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483328 master-0 kubenswrapper[3187]: E1203 14:08:18.483298 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.483290776 +0000 UTC m=+28.449826681 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483386 master-0 kubenswrapper[3187]: E1203 14:08:18.483339 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483386 master-0 kubenswrapper[3187]: E1203 14:08:18.483349 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.483386 master-0 kubenswrapper[3187]: E1203 14:08:18.483356 3187 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483386 master-0 kubenswrapper[3187]: E1203 14:08:18.483374 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.483368859 +0000 UTC m=+28.449904754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483429 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483439 3187 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483446 3187 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483467 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.483461341 +0000 UTC m=+28.449997226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483513 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483523 3187 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.483536 master-0 kubenswrapper[3187]: E1203 14:08:18.483529 3187 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483777 master-0 kubenswrapper[3187]: E1203 14:08:18.483549 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.483543554 +0000 UTC m=+28.450079439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483777 master-0 kubenswrapper[3187]: E1203 14:08:18.483592 3187 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.483777 master-0 kubenswrapper[3187]: E1203 14:08:18.483601 3187 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.483777 master-0 kubenswrapper[3187]: E1203 14:08:18.483608 3187 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.483777 master-0 kubenswrapper[3187]: E1203 14:08:18.483627 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.483620806 +0000 UTC m=+28.450156701 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.610887 master-0 kubenswrapper[3187]: I1203 14:08:18.610519 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9"} Dec 03 14:08:18.612124 master-0 kubenswrapper[3187]: I1203 14:08:18.612087 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90"} Dec 03 14:08:18.613402 master-0 kubenswrapper[3187]: I1203 14:08:18.613350 3187 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139"} Dec 03 14:08:18.692876 master-0 kubenswrapper[3187]: I1203 14:08:18.692666 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:18.692876 master-0 kubenswrapper[3187]: E1203 14:08:18.692854 3187 
projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.692905 3187 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.692921 3187 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: I1203 14:08:18.692966 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.692991 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.692965613 +0000 UTC m=+28.659501508 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.693166 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.693192 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.693214 master-0 kubenswrapper[3187]: E1203 14:08:18.693210 3187 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: I1203 14:08:18.693251 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: 
E1203 14:08:18.693279 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.693256211 +0000 UTC m=+28.659792116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693340 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693359 3187 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693372 3187 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693402 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:20.693393765 +0000 UTC m=+28.659929850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: I1203 14:08:18.693354 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693450 3187 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693469 3187 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693479 3187 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:18.693498 master-0 kubenswrapper[3187]: E1203 14:08:18.693512 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch 
podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.693501728 +0000 UTC m=+28.660037633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:18.903288 master-0 kubenswrapper[3187]: I1203 14:08:18.903187 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:18.903288 master-0 kubenswrapper[3187]: I1203 14:08:18.903281 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903392 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903415 3187 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903447 3187 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903490 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903511 3187 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903522 3187 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:18.903722 master-0 kubenswrapper[3187]: E1203 14:08:18.903590 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.903572448 +0000 UTC m=+28.870108343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:18.903987 master-0 kubenswrapper[3187]: E1203 14:08:18.903843 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:20.903751213 +0000 UTC m=+28.870287148 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.006955 master-0 kubenswrapper[3187]: I1203 14:08:19.006878 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:19.006955 master-0 kubenswrapper[3187]: I1203 14:08:19.006958 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: I1203 14:08:19.007088 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007144 3187 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007175 3187 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007192 3187 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: I1203 14:08:19.007237 3187 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007271 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.007246568 +0000 UTC m=+28.973782463 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007271 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007316 3187 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007332 3187 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007331 3187 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007353 3187 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007403 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.007381072 +0000 UTC m=+28.973916967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.007479 master-0 kubenswrapper[3187]: E1203 14:08:19.007443 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.007433044 +0000 UTC m=+28.973968939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007893 master-0 kubenswrapper[3187]: E1203 14:08:19.007463 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:19.007893 master-0 kubenswrapper[3187]: E1203 14:08:19.007567 3187 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:19.007893 master-0 kubenswrapper[3187]: E1203 14:08:19.007634 3187 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.007893 master-0 kubenswrapper[3187]: E1203 14:08:19.007847 3187 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.007815935 +0000 UTC m=+28.974351880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:19.028973 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Dec 03 14:08:19.054163 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Dec 03 14:08:19.054443 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Dec 03 14:08:19.054869 master-0 systemd[1]: kubelet.service: Consumed 3.424s CPU time.
Dec 03 14:08:19.071518 master-0 systemd[1]: Starting Kubernetes Kubelet...
Dec 03 14:08:19.186923 master-0 kubenswrapper[4387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:08:19.186923 master-0 kubenswrapper[4387]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 14:08:19.186923 master-0 kubenswrapper[4387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:08:19.186923 master-0 kubenswrapper[4387]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:08:19.188353 master-0 kubenswrapper[4387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 14:08:19.188353 master-0 kubenswrapper[4387]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:08:19.188353 master-0 kubenswrapper[4387]: I1203 14:08:19.187034 4387 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189609 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189634 4387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189641 4387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189646 4387 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189651 4387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189657 4387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189663 4387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:08:19.189652 master-0 kubenswrapper[4387]: W1203 14:08:19.189669 4387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189675 4387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189689 4387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189695 4387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189701 4387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189705 4387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189710 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189714 4387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189719 4387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189723 4387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189730 4387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189735 4387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189739 4387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189743 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189747 4387 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189751 4387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189755 4387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189758 4387 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189763 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:08:19.189947 master-0 kubenswrapper[4387]: W1203 14:08:19.189767 4387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189772 4387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189775 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189779 4387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189783 4387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189786 4387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189790 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189794 4387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189799 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189803 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189806 4387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189811 4387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189816 4387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189820 4387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189824 4387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189829 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189833 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189838 4387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189842 4387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:08:19.190732 master-0 kubenswrapper[4387]: W1203 14:08:19.189846 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189850 4387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189856 4387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189860 4387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189864 4387 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189868 4387 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189873 4387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189878 4387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189881 4387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189885 4387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189889 4387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189893 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189896 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189901 4387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189905 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189908 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189912 4387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189915 4387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189919 4387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189923 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:08:19.191647 master-0 kubenswrapper[4387]: W1203 14:08:19.189926 4387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189930 4387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189933 4387 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189937 4387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189940 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189944 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: W1203 14:08:19.189947 4387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190055 4387 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190068 4387 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190084 4387 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190093 4387 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190101 4387 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190107 4387 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190115 4387 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190122 4387 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190128 4387 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190134 4387 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190140 4387 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190146 4387 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190151 4387 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190156 4387 flags.go:64] FLAG: --cgroup-root=""
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190161 4387 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190166 4387 flags.go:64] FLAG: --client-ca-file=""
Dec 03 14:08:19.192446 master-0 kubenswrapper[4387]: I1203 14:08:19.190172 4387 flags.go:64] FLAG: --cloud-config=""
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190176 4387 flags.go:64] FLAG: --cloud-provider=""
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190180 4387 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190187 4387 flags.go:64] FLAG: --cluster-domain=""
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190192 4387 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190196 4387 flags.go:64] FLAG: --config-dir=""
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190201 4387 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190206 4387 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190213 4387 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190217 4387 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190222 4387 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190226 4387 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190231 4387 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190235 4387 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190239 4387 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190243 4387 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190247 4387 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190254 4387 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190259 4387 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190263 4387 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190267 4387 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190271 4387 flags.go:64] FLAG: --enable-server="true"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190278 4387 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190285 4387 flags.go:64] FLAG: --event-burst="100"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190289 4387 flags.go:64] FLAG: --event-qps="50"
Dec 03 14:08:19.193596 master-0 kubenswrapper[4387]: I1203 14:08:19.190294 4387 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190298 4387 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190303 4387 flags.go:64] FLAG: --eviction-hard=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190309 4387 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190313 4387 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190318 4387 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190322 4387 flags.go:64] FLAG: --eviction-soft=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190327 4387 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190332 4387 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190337 4387 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190342 4387 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190347 4387 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190352 4387 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190357 4387 flags.go:64] FLAG: --feature-gates=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190364 4387 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190369 4387 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190374 4387 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190381 4387 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190387 4387 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190393 4387 flags.go:64] FLAG: --help="false"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190397 4387 flags.go:64] FLAG: --hostname-override=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190402 4387 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190407 4387 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190412 4387 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190436 4387 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 14:08:19.194435 master-0 kubenswrapper[4387]: I1203 14:08:19.190442 4387 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190448 4387 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190453 4387 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190459 4387 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190466 4387 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190471 4387 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190477 4387 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190483 4387 flags.go:64] FLAG: --kube-reserved=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190488 4387 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190494 4387 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190499 4387 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190504 4387 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190508 4387 flags.go:64] FLAG: --lock-file=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190513 4387 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190518 4387 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190522 4387 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190530 4387 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190534 4387 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190538 4387 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190543 4387 flags.go:64] FLAG: --logging-format="text"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190547 4387 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190552 4387 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190557 4387 flags.go:64] FLAG: --manifest-url=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190561 4387 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190566 4387 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 14:08:19.195532 master-0 kubenswrapper[4387]: I1203 14:08:19.190571 4387 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190576 4387 flags.go:64] FLAG: --max-pods="110"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190581 4387 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190586 4387 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190590 4387 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190594 4387 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190599 4387 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190604 4387 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190609 4387 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190622 4387 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190626 4387 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190631 4387 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190637 4387 flags.go:64] FLAG: --pod-cidr=""
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190643 4387 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190652 4387 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190657 4387 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190662 4387 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190667 4387 flags.go:64] FLAG: --port="10250"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190672 4387 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190677 4387 flags.go:64] FLAG: --provider-id=""
Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190682 4387 flags.go:64] FLAG: --qos-reserved=""
Dec 03 14:08:19.196152 master-0
kubenswrapper[4387]: I1203 14:08:19.190687 4387 flags.go:64] FLAG: --read-only-port="10255" Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190691 4387 flags.go:64] FLAG: --register-node="true" Dec 03 14:08:19.196152 master-0 kubenswrapper[4387]: I1203 14:08:19.190696 4387 flags.go:64] FLAG: --register-schedulable="true" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190701 4387 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190710 4387 flags.go:64] FLAG: --registry-burst="10" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190716 4387 flags.go:64] FLAG: --registry-qps="5" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190721 4387 flags.go:64] FLAG: --reserved-cpus="" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190727 4387 flags.go:64] FLAG: --reserved-memory="" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190736 4387 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190743 4387 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190749 4387 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190755 4387 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190760 4387 flags.go:64] FLAG: --runonce="false" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190766 4387 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190771 4387 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190775 4387 
flags.go:64] FLAG: --seccomp-default="false" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190780 4387 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190784 4387 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190791 4387 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190795 4387 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190800 4387 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190804 4387 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190809 4387 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190813 4387 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190818 4387 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190823 4387 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190827 4387 flags.go:64] FLAG: --system-cgroups="" Dec 03 14:08:19.199631 master-0 kubenswrapper[4387]: I1203 14:08:19.190831 4387 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190838 4387 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190842 4387 flags.go:64] FLAG: --tls-cert-file="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190846 4387 flags.go:64] FLAG: 
--tls-cipher-suites="[]" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190853 4387 flags.go:64] FLAG: --tls-min-version="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190863 4387 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190868 4387 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190872 4387 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190877 4387 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190881 4387 flags.go:64] FLAG: --v="2" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190888 4387 flags.go:64] FLAG: --version="false" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190895 4387 flags.go:64] FLAG: --vmodule="" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190901 4387 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: I1203 14:08:19.190909 4387 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191025 4387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191031 4387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191035 4387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191040 4387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191044 4387 
feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191048 4387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191053 4387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191057 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191061 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:08:19.203146 master-0 kubenswrapper[4387]: W1203 14:08:19.191066 4387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191070 4387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191075 4387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191079 4387 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191083 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191088 4387 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191092 4387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191095 4387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191100 4387 feature_gate.go:351] Setting deprecated feature gate 
KMSv1=true. It will be removed in a future release. Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191105 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191109 4387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191114 4387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191117 4387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191121 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191125 4387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191129 4387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191133 4387 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191137 4387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191142 4387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191146 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:08:19.203913 master-0 kubenswrapper[4387]: W1203 14:08:19.191150 4387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191154 4387 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191160 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191164 4387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191168 4387 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191172 4387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191176 4387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191179 4387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191183 4387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191187 4387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191222 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191226 4387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191234 4387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191238 4387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191242 4387 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:08:19.204409 master-0 
kubenswrapper[4387]: W1203 14:08:19.191246 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191250 4387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191254 4387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191259 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:08:19.204409 master-0 kubenswrapper[4387]: W1203 14:08:19.191262 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191267 4387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191270 4387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191275 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191279 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191283 4387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191286 4387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191290 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191294 4387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 
14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191297 4387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191301 4387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191305 4387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191308 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191312 4387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191317 4387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191328 4387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191333 4387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191337 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191341 4387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 14:08:19.205035 master-0 kubenswrapper[4387]: W1203 14:08:19.191346 4387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.191350 4387 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.191354 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.191357 4387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.191361 4387 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: I1203 14:08:19.191367 4387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: I1203 14:08:19.201073 4387 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: I1203 14:08:19.201104 4387 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201205 4387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201216 4387 feature_gate.go:330] unrecognized feature gate: 
AdditionalRoutingCapabilities Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201222 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201229 4387 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201235 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201239 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201244 4387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:08:19.205570 master-0 kubenswrapper[4387]: W1203 14:08:19.201249 4387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201253 4387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201259 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201263 4387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201268 4387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201272 4387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201277 4387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201281 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:08:19.205950 master-0 
kubenswrapper[4387]: W1203 14:08:19.201285 4387 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201291 4387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201297 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201302 4387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201307 4387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201312 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201316 4387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201321 4387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201325 4387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201330 4387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201334 4387 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201338 4387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:08:19.205950 master-0 kubenswrapper[4387]: W1203 14:08:19.201343 4387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:08:19.207409 master-0 
kubenswrapper[4387]: W1203 14:08:19.201347 4387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201354 4387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201362 4387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201370 4387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201376 4387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201381 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201386 4387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201390 4387 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201394 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201399 4387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201403 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201409 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201435 4387 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201440 4387 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201445 4387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201450 4387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201454 4387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201459 4387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201466 4387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 14:08:19.207409 master-0 kubenswrapper[4387]: W1203 14:08:19.201473 4387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201478 4387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201483 4387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201487 4387 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201493 4387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201497 4387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201503 4387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201508 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201514 4387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201519 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201523 4387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201528 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201533 4387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201538 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201542 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201547 4387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201553 4387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201557 4387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201562 4387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:08:19.208749 master-0 kubenswrapper[4387]: W1203 14:08:19.201567 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.201571 4387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.201575 4387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.201580 4387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.201584 4387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.201589 4387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: I1203 14:08:19.201597 4387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205261 4387 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205300 4387 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205313 4387 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205319 4387 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205324 4387 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205332 4387 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205340 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205345 4387 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:08:19.209268 master-0 kubenswrapper[4387]: W1203 14:08:19.205350 4387 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205354 4387 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205359 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205364 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205368 4387 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205372 4387 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205377 4387 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205381 4387 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205386 4387 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205390 4387 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205394 4387 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205399 4387 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205404 4387 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205410 4387 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205432 4387 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205437 4387 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205442 4387 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205449 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205454 4387 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205458 4387 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:08:19.209700 master-0 kubenswrapper[4387]: W1203 14:08:19.205464 4387 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205470 4387 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205475 4387 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205480 4387 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205486 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205492 4387 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205499 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205505 4387 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205510 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205515 4387 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205533 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205540 4387 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205547 4387 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205553 4387 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205559 4387 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205564 4387 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205569 4387 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205575 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205582 4387 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205586 4387 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:08:19.210195 master-0 kubenswrapper[4387]: W1203 14:08:19.205698 4387 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205708 4387 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205712 4387 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205717 4387 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205722 4387 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205727 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205732 4387 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205737 4387 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205741 4387 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205746 4387 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205751 4387 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205755 4387 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205760 4387 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205764 4387 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205769 4387 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205773 4387 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205779 4387 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205783 4387 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205791 4387 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:08:19.210721 master-0 kubenswrapper[4387]: W1203 14:08:19.205797 4387 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: W1203 14:08:19.205802 4387 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: W1203 14:08:19.205806 4387 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: W1203 14:08:19.205811 4387 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: W1203 14:08:19.205815 4387 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: I1203 14:08:19.205827 4387 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: I1203 14:08:19.206141 4387 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: I1203 14:08:19.210721 4387 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 14:08:19.211208 master-0 kubenswrapper[4387]: I1203 14:08:19.210854 4387 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 14:08:19.211708 master-0 kubenswrapper[4387]: I1203 14:08:19.211280 4387 server.go:997] "Starting client certificate rotation"
Dec 03 14:08:19.211708 master-0 kubenswrapper[4387]: I1203 14:08:19.211298 4387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 14:08:19.211865 master-0 kubenswrapper[4387]: I1203 14:08:19.211639 4387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 10:20:11.429611522 +0000 UTC
Dec 03 14:08:19.211865 master-0 kubenswrapper[4387]: I1203 14:08:19.211757 4387 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h11m52.217858813s for next certificate rotation
Dec 03 14:08:19.213917 master-0 kubenswrapper[4387]: I1203 14:08:19.213878 4387 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 14:08:19.216329 master-0 kubenswrapper[4387]: I1203 14:08:19.216132 4387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 14:08:19.221562 master-0 kubenswrapper[4387]: I1203 14:08:19.221526 4387 log.go:25] "Validated CRI v1 runtime API"
Dec 03 14:08:19.228127 master-0 kubenswrapper[4387]: I1203 14:08:19.228043 4387 log.go:25] "Validated CRI v1 image API"
Dec 03 14:08:19.229935 master-0 kubenswrapper[4387]: I1203 14:08:19.229804 4387 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 03 14:08:19.239235 master-0 kubenswrapper[4387]: I1203 14:08:19.239138 4387 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3]
Dec 03 14:08:19.239877 master-0 kubenswrapper[4387]: I1203 14:08:19.239197 4387 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm major:0 minor:353 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm major:0 minor:217 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm major:0 minor:221 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm major:0 minor:326 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm major:0 
minor:322 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm major:0 minor:362 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm major:0 minor:387 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm major:0 minor:329 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm major:0 minor:68 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm major:0 minor:62 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9:{mountpoint:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 major:0 minor:328 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp major:0 minor:198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf:{mountpoint:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf major:0 minor:315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl:{mountpoint:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl major:0 minor:366 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j major:0 minor:314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth major:0 minor:201 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv:{mountpoint:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv major:0 minor:311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:278 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2:{mountpoint:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 major:0 minor:359 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:351 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4:{mountpoint:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 major:0 minor:319 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg:{mountpoint:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg major:0 minor:352 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q:{mountpoint:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q major:0 minor:371 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls major:0 minor:350 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn:{mountpoint:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn major:0 minor:361 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:348 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:207 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:195 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj:{mountpoint:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj major:0 minor:382 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:210 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:360 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx major:0 minor:327 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs major:0 minor:196 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:211 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7:{mountpoint:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7 major:0 minor:349 fsType:tmpfs blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/06efd20e5e2a9b7e172efeca39d2dea7ac968e5a88d0e1d95e55c8fcbbcf94c1/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/be05bff67b44502e94e065df53bd887d4cc895b98b360cf29ea7ea7ed16edef5/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/15a92ce52afbb72f98fb4ef195bb8a01eec6b31443e3e1eccd5d4ac77becdea5/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/7cc39c503d868e009fa7329d4314b52878f872c4c7aee87a28c4b95d14783061/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/f0879cef70295ecbf90aa73204b488d707aa0573b4ee954fc0ec3bf17cecacfd/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/a8551e2fe6db269b264ddbce879fb852cbd5221efe431f55f4a51d2123efc516/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/ccf7b687ce6e8526d5911a4c547202d3b89bfd09b77166ab6b13962f29bb9277/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/ad21089139acfefc2fcc6a255897d76cf8c9d5058fe6c8bd7c324af3345035bd/merged major:0 minor:165 fsType:overlay blockSize:0} 
overlay_0-173:{mountpoint:/var/lib/containers/storage/overlay/1d65e45dfee7595fca28003f98074e1c33592333486177ac0e9b2a2cc05668bd/merged major:0 minor:173 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/740a5f33e0f61edcf7477c3d649971201e6c3fc576f2bcfb75c28dec71aebe5d/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/6ec9fadcc3a1823811a457869024ee9fe0a4984856fd0b4c1314dd76a130a230/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/fd77124c21370beae2d47d95c33eac47d97989c304dee1888d11427bf6724f54/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/0c059b548a463eb40e763dc48c0a1047e4b388a44bd010000f048af9ab01b274/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/76ace6528e458acc462ca65ee0fe34dd4a39399ed2bc28f9b8b7357f0c51387f/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/5c5cf5cc825321aabf201851d55753b80df430d9b554c3ec653d1bc7a8156156/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-227:{mountpoint:/var/lib/containers/storage/overlay/7ebf0858301167784d1d76837204ad39b2db08115ed23c6ed8cfd6caf15a2539/merged major:0 minor:227 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/000434fa0cb022fb5d67c5fa2d239af1b38fe05e76b055dc7ec1735a20cd6f2a/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-246:{mountpoint:/var/lib/containers/storage/overlay/a94f83ec3fc50b340fa40529f458619100f1246242c3b997a4ec5c2132865e02/merged major:0 minor:246 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/2279cf1b7dedadc7114b3a176874e7383386817b2805e38b348dde15ff437f85/merged major:0 minor:248 fsType:overlay blockSize:0} 
overlay_0-251:{mountpoint:/var/lib/containers/storage/overlay/5efeb81c967d3474710392c84ac6402afd067890a5ccc87798916c004fa7b1f9/merged major:0 minor:251 fsType:overlay blockSize:0} overlay_0-260:{mountpoint:/var/lib/containers/storage/overlay/4e9472a86a27e2a2a5a974daf57a7a8bcef40e721b8928bd37100dec2b512427/merged major:0 minor:260 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/17a37bd7888ba6f733e11b5d9ae431d57f2c69459605fbcc224e39d78978bfd7/merged major:0 minor:263 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/284b152d29c59be2a7d0b7c61d3406a9f4eac49e523afa553463ba28c4649d9f/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/09b3e1efe17842a851cd7555e10091f1e10f324e90514c7fd6349215180ec387/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/09162a6380376b19b0484b2c175550af0fb87c036d30e6869638abf76be7a3a2/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/728d2e01ae30b6867e9388ad30d23d3c84d5a634043016b71729fad3798ff376/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/6f61805b9ed1a7dc0269bc2716080128318b569348ce23121141779c0118afff/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/79e9d7e32b79d48434a84f9add8cdb04c4c2a41dd07686ee1d67f55251e6d199/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/a9455d6d62fffaa66e3f04e4f8ec97bac2968e73fe0a612f9a45a02094d6576c/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/6866284c970c12403790dd0db140caff0a792ae52caa096cb08dab660173ed0d/merged major:0 minor:316 fsType:overlay blockSize:0} 
overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/64a06ae73807ff5437fabf35ee00f8cc6e89dc609695eead306f53d317127efa/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/0ee0747f636e0287ec2be6fe333c77d7392ac34c132fbc64ff3f1c1d022c8573/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/a8808f61351cd2f1591ec3c2e25da62c9de1dc8aca0b5a9f6f5ffae47f9c1b5d/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/ba974d9ad20228bbfed8db9b746be2a67b4cbabba3854de22a9ff06e76595afb/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/e3e5b1174f7f5f5b35473b6e0efe88220eb3f56c2cabd12cc34d731a01dc2d6b/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/4b9dc1a3ad418c15def0bc0ee9fc7599461d191ae9873a5c8867a25274b1e43d/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/126bee45cd5c5a0c59c5c3f683cac3892b047167cf5671b79253c2ab27c16c08/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/9af82de5e3e8292e9f259a486fc5773d4214b3c0cc8874486ff63bc219082cff/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/ebabb805e1a0b3ccf21db0c8f1db330cf653ea3852c3f0ef8820f17a7f9dc048/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/e1e3f75a638ac905b73fd00c46d1ffa3218b2926b51f6870e3777c15ca374de6/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/53d23c088e79a218658d9a3cb468c4d23cffa491aca399c164bc18683b2443e7/merged major:0 minor:357 fsType:overlay blockSize:0} 
overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/157c860d309cb38a3266152f2cec6eeb9eb728a20149ea8464570b93c42c7af0/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/f9e4f89f9c61de14e8b3e8884d875235699701a7a8301bb37362c1996d8fe46a/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/b19e8d63e0319893a4fa4dfef4f1e7f6d394abaa7df7c9f2d92c8515a67e8f7e/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/f7a50d8d08b827fb76d9c00d21a0072fbe69f41d273577fb47b6ba7775578876/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/0426fe41eadea03021ed362ea5d44c61a9753ab06aab1b7491f5faad8a283052/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/fa30a35224ebf04b3f5b74f937b6fdd31bd316019307bba2fc1ac9d020bb4c76/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/b118398522c860fbd26498ae28bf992f64c34f9320492edd0b0c717b44152bbc/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/09c5d21c4e204688020704d569d44765f395dfc9e127955f55b27cdfd676e4e5/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/52a31185f67e410d64e54164e9ecf8863d891c184ee35590cf7fe8c3e451435c/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/62cc2aec484dc042f9f0de358ef8b1ac71bb6603426588f4aaf8bbe2864f001e/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/e229a06df9d106704023d1557f6d44bd12b817e0ee45a52aa493dc69108f70f5/merged major:0 minor:45 fsType:overlay blockSize:0} 
overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/4a8b9b964ae0b106bcaf4e89eacb6d2f49abcb4fbc5e183abfa3e19bf67e7d8c/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/e3c42b0306142b7fd125109a14d434f29cc4336dc10da90e4086f0364ea4c2d0/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/517314f3020ee5b846f722aded79a2b8a0ed7cf6e1aacd19356865eb1a60f40c/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/9d24b4239a0cf7fe1b20479fe2c006eb64ef8027b09e1fd3ff97e218b948abaa/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/b583cfa94b91996075195eb2b790d5c381a728e1985ad741f431b3f048ef35f7/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/f3d66257f207ef34673d82e60fc31daf4f79badf034d93758682b1a1e2a1570a/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/e871924d2cda38656ab6d6daef5a8983596945f0effbbe865e57d44206a87dbe/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/804317c1e34d157305a10f55f38e481655f4fe20cacb2ee14de88d1022b9fc01/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/57e3fba788e0f10e444395a9584f140adffdb7ed744de318324b5f3994562321/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/791ea21ba7726cd6548f2aff335d46079cb3f8273ba24b6eeb64af2eb9f4d3a5/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/6dd87a98f0c0350d7845a648c0a7446eb97631a09a0b0f5c82d22568f2ee5a9e/merged major:0 minor:76 fsType:overlay blockSize:0} 
overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/e80542971c4d2518d3d01ef14521f830b127ac4b189e9d38731933c61e100ba1/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/46f9e5a5f5fce397eb8dd2f3613c3efe4341530f9e64cdfd325cde4b308ea261/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/3bbac828a6b0b627c1168bfc0e9c6d4f5cd74b74711e67632b60116b0ee8e4da/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/5fee70083481fac133262f8bbfc00a7346b9e4f7ea77999d78db1e8245cb3dad/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/5a016faa71ace2a6f4b487db8814fed0f4afa5a744c8fc31809847d5e3f129bb/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/d5810a37b17cf2bfc90dbeb6f989a9fdb3d3e139f92695a09151dd19feca898e/merged major:0 minor:89 fsType:overlay blockSize:0}] Dec 03 14:08:19.269293 master-0 kubenswrapper[4387]: I1203 14:08:19.268557 4387 manager.go:217] Machine: {Timestamp:2025-12-03 14:08:19.267565678 +0000 UTC m=+0.123189347 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:764a923e-eafb-47f4-8635-9cb972b9b445 Filesystems:[{Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp DeviceMajor:0 DeviceMinor:214 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls DeviceMajor:0 DeviceMinor:350 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-227 DeviceMajor:0 DeviceMinor:227 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj DeviceMajor:0 DeviceMinor:382 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:197 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn DeviceMajor:0 DeviceMinor:361 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-173 DeviceMajor:0 DeviceMinor:173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:202 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm DeviceMajor:0 DeviceMinor:217 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:211 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:313 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert 
DeviceMajor:0 DeviceMinor:200 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-260 DeviceMajor:0 DeviceMinor:260 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:351 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm DeviceMajor:0 DeviceMinor:387 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:312 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm DeviceMajor:0 DeviceMinor:329 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:208 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:199 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:348 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:360 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:204 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:213 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm DeviceMajor:0 DeviceMinor:221 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 DeviceMajor:0 DeviceMinor:359 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 DeviceMajor:0 DeviceMinor:319 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j DeviceMajor:0 DeviceMinor:314 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx DeviceMajor:0 DeviceMinor:327 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg DeviceMajor:0 DeviceMinor:352 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 
DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl DeviceMajor:0 DeviceMinor:366 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q DeviceMajor:0 DeviceMinor:371 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:203 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-246 DeviceMajor:0 DeviceMinor:246 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 
DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:195 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-251 DeviceMajor:0 DeviceMinor:251 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:209 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 DeviceMajor:0 DeviceMinor:216 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 
DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm DeviceMajor:0 DeviceMinor:68 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm DeviceMajor:0 DeviceMinor:322 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm DeviceMajor:0 DeviceMinor:362 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:318 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:198 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:205 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:210 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh DeviceMajor:0 DeviceMinor:215 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv DeviceMajor:0 DeviceMinor:311 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7 DeviceMajor:0 DeviceMinor:349 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm DeviceMajor:0 DeviceMinor:62 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:201 Capacity:49335554048 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf DeviceMajor:0 DeviceMinor:315 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm DeviceMajor:0 DeviceMinor:353 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:196 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:212 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm DeviceMajor:0 DeviceMinor:326 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 
DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:36:91:5c:9c:b9:c3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 
Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data 
Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:08:19.269293 master-0 kubenswrapper[4387]: I1203 14:08:19.269155 4387 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 14:08:19.269714 master-0 kubenswrapper[4387]: I1203 14:08:19.269361 4387 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:08:19.269850 master-0 kubenswrapper[4387]: I1203 14:08:19.269814 4387 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:08:19.270053 master-0 kubenswrapper[4387]: I1203 14:08:19.269995 4387 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:08:19.270309 master-0 kubenswrapper[4387]: I1203 14:08:19.270048 4387 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage
":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:08:19.270368 master-0 kubenswrapper[4387]: I1203 14:08:19.270336 4387 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:08:19.270368 master-0 kubenswrapper[4387]: I1203 14:08:19.270350 4387 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:08:19.270368 master-0 kubenswrapper[4387]: I1203 14:08:19.270365 4387 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:08:19.270483 master-0 kubenswrapper[4387]: I1203 14:08:19.270394 4387 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:08:19.270592 master-0 kubenswrapper[4387]: I1203 14:08:19.270564 4387 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:08:19.270709 master-0 kubenswrapper[4387]: I1203 14:08:19.270685 4387 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:08:19.270795 master-0 kubenswrapper[4387]: I1203 14:08:19.270768 4387 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:08:19.270842 master-0 kubenswrapper[4387]: I1203 14:08:19.270797 4387 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:08:19.270842 master-0 kubenswrapper[4387]: I1203 14:08:19.270830 4387 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:08:19.270896 master-0 kubenswrapper[4387]: I1203 14:08:19.270849 4387 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:08:19.270896 master-0 
kubenswrapper[4387]: I1203 14:08:19.270875 4387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:08:19.273061 master-0 kubenswrapper[4387]: I1203 14:08:19.273017 4387 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:08:19.273237 master-0 kubenswrapper[4387]: I1203 14:08:19.273204 4387 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 03 14:08:19.273698 master-0 kubenswrapper[4387]: I1203 14:08:19.273672 4387 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:08:19.275734 master-0 kubenswrapper[4387]: I1203 14:08:19.275702 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:08:19.275734 master-0 kubenswrapper[4387]: I1203 14:08:19.275734 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:08:19.275812 master-0 kubenswrapper[4387]: I1203 14:08:19.275747 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:08:19.275812 master-0 kubenswrapper[4387]: I1203 14:08:19.275758 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:08:19.275812 master-0 kubenswrapper[4387]: I1203 14:08:19.275768 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:08:19.275812 master-0 kubenswrapper[4387]: I1203 14:08:19.275779 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:08:19.275919 master-0 kubenswrapper[4387]: I1203 14:08:19.275836 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:08:19.275919 master-0 kubenswrapper[4387]: I1203 14:08:19.275849 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:08:19.275919 master-0 
kubenswrapper[4387]: I1203 14:08:19.275858 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:08:19.275919 master-0 kubenswrapper[4387]: I1203 14:08:19.275868 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:08:19.275919 master-0 kubenswrapper[4387]: I1203 14:08:19.275894 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:08:19.275919 master-0 kubenswrapper[4387]: I1203 14:08:19.275909 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 14:08:19.276071 master-0 kubenswrapper[4387]: I1203 14:08:19.275960 4387 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:08:19.277099 master-0 kubenswrapper[4387]: I1203 14:08:19.277068 4387 server.go:1280] "Started kubelet" Dec 03 14:08:19.278830 master-0 kubenswrapper[4387]: I1203 14:08:19.278131 4387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:08:19.278830 master-0 kubenswrapper[4387]: I1203 14:08:19.278280 4387 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:08:19.279103 master-0 kubenswrapper[4387]: I1203 14:08:19.278928 4387 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:08:19.278918 master-0 systemd[1]: Started Kubernetes Kubelet. 
Dec 03 14:08:19.279576 master-0 kubenswrapper[4387]: I1203 14:08:19.278943 4387 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 14:08:19.285478 master-0 kubenswrapper[4387]: I1203 14:08:19.285251 4387 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.290530 4387 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.290967 4387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291011 4387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291024 4387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 10:14:28.495024047 +0000 UTC Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291068 4387 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h6m9.2039581s for next certificate rotation Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291396 4387 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291460 4387 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291586 4387 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.291670 4387 server.go:449] "Adding debug handlers to kubelet server" Dec 03 14:08:19.294040 master-0 kubenswrapper[4387]: I1203 14:08:19.293146 4387 factory.go:55] Registering systemd factory Dec 03 14:08:19.294040 master-0 
kubenswrapper[4387]: I1203 14:08:19.293631 4387 factory.go:221] Registration of the systemd container factory successfully Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: E1203 14:08:19.294848 4387 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: I1203 14:08:19.295149 4387 factory.go:153] Registering CRI-O factory Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: I1203 14:08:19.295174 4387 factory.go:221] Registration of the crio container factory successfully Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: I1203 14:08:19.295317 4387 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: I1203 14:08:19.295361 4387 factory.go:103] Registering Raw factory Dec 03 14:08:19.295587 master-0 kubenswrapper[4387]: I1203 14:08:19.295389 4387 manager.go:1196] Started watching for new ooms in manager Dec 03 14:08:19.297025 master-0 kubenswrapper[4387]: I1203 14:08:19.295580 4387 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:19.297025 master-0 kubenswrapper[4387]: I1203 14:08:19.296338 4387 manager.go:319] Starting recovery of all containers Dec 03 14:08:19.301781 master-0 systemd[1]: Stopping Kubernetes Kubelet... Dec 03 14:08:19.313791 master-0 systemd[1]: kubelet.service: Deactivated successfully. Dec 03 14:08:19.314223 master-0 systemd[1]: Stopped Kubernetes Kubelet. Dec 03 14:08:19.334024 master-0 systemd[1]: Starting Kubernetes Kubelet... 
Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 14:08:19.447750 master-0 kubenswrapper[4430]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 14:08:19.448917 master-0 kubenswrapper[4430]: I1203 14:08:19.447845 4430 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 14:08:19.450635 master-0 kubenswrapper[4430]: W1203 14:08:19.450599 4430 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:08:19.450635 master-0 kubenswrapper[4430]: W1203 14:08:19.450629 4430 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450639 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450645 4430 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450650 4430 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450655 4430 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450660 4430 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450666 4430 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450670 4430 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450675 4430 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450680 4430 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 
14:08:19.450683 4430 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450698 4430 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450703 4430 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450707 4430 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450711 4430 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450716 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:08:19.450708 master-0 kubenswrapper[4430]: W1203 14:08:19.450722 4430 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450728 4430 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450733 4430 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450737 4430 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450741 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450746 4430 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450750 4430 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450753 4430 feature_gate.go:330] unrecognized 
feature gate: InsightsRuntimeExtractor Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450759 4430 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450764 4430 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450768 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450774 4430 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450780 4430 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450784 4430 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450788 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450793 4430 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450797 4430 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450802 4430 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450806 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:08:19.451121 master-0 kubenswrapper[4430]: W1203 14:08:19.450810 4430 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450813 4430 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450818 4430 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450822 4430 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450826 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450830 4430 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450835 4430 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450839 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450843 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450847 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450860 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450864 4430 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450868 4430 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450871 4430 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450875 4430 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450882 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450887 4430 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450891 4430 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450895 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450900 4430 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:08:19.451664 master-0 kubenswrapper[4430]: W1203 14:08:19.450904 4430 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450909 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450913 4430 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450917 4430 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450921 4430 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450925 4430 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450929 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450933 4430 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450938 4430 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450942 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450946 4430 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450950 4430 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450955 4430 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450961 4430 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450966 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: W1203 14:08:19.450971 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451108 4430 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451125 4430 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451141 4430 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451150 4430 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451157 4430 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 14:08:19.452178 master-0 kubenswrapper[4430]: I1203 14:08:19.451164 4430 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451173 4430 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451179 4430 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451184 4430 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451188 4430 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451203 4430 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451210 4430 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451216 4430 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451222 4430 flags.go:64] FLAG: --cgroup-root=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451227 4430 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451232 4430 flags.go:64] FLAG: --client-ca-file=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451238 4430 flags.go:64] FLAG: --cloud-config=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451242 4430 flags.go:64] FLAG: --cloud-provider=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451247 4430 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451256 4430 flags.go:64] FLAG: --cluster-domain=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451260 4430 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451264 4430 flags.go:64] FLAG: --config-dir=""
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451268 4430 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451273 4430 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451280 4430 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451285 4430 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451290 4430 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451296 4430 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451301 4430 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 14:08:19.452720 master-0 kubenswrapper[4430]: I1203 14:08:19.451308 4430 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451313 4430 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451318 4430 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451324 4430 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451331 4430 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451336 4430 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451342 4430 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451346 4430 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451350 4430 flags.go:64] FLAG: --enable-server="true"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451355 4430 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451365 4430 flags.go:64] FLAG: --event-burst="100"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451369 4430 flags.go:64] FLAG: --event-qps="50"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451374 4430 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451379 4430 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451387 4430 flags.go:64] FLAG: --eviction-hard=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451431 4430 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451435 4430 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451448 4430 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451454 4430 flags.go:64] FLAG: --eviction-soft=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451459 4430 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451463 4430 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451468 4430 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451472 4430 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451477 4430 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451481 4430 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 14:08:19.453643 master-0 kubenswrapper[4430]: I1203 14:08:19.451485 4430 flags.go:64] FLAG: --feature-gates=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451490 4430 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451495 4430 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451499 4430 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451504 4430 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451508 4430 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451513 4430 flags.go:64] FLAG: --help="false"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451517 4430 flags.go:64] FLAG: --hostname-override=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451521 4430 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451525 4430 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451529 4430 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451534 4430 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451539 4430 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451544 4430 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451549 4430 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451553 4430 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451557 4430 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451562 4430 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451566 4430 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451572 4430 flags.go:64] FLAG: --kube-reserved=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451576 4430 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451581 4430 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451585 4430 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451590 4430 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451595 4430 flags.go:64] FLAG: --lock-file=""
Dec 03 14:08:19.454255 master-0 kubenswrapper[4430]: I1203 14:08:19.451600 4430 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451605 4430 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451611 4430 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451628 4430 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451633 4430 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451637 4430 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451641 4430 flags.go:64] FLAG: --logging-format="text"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451646 4430 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451650 4430 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451654 4430 flags.go:64] FLAG: --manifest-url=""
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451659 4430 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451667 4430 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451673 4430 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451680 4430 flags.go:64] FLAG: --max-pods="110"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451684 4430 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451689 4430 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451693 4430 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451698 4430 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451702 4430 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451707 4430 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451711 4430 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451722 4430 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451726 4430 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451730 4430 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 14:08:19.454883 master-0 kubenswrapper[4430]: I1203 14:08:19.451735 4430 flags.go:64] FLAG: --pod-cidr=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451739 4430 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451749 4430 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451754 4430 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451758 4430 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451763 4430 flags.go:64] FLAG: --port="10250"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451767 4430 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451771 4430 flags.go:64] FLAG: --provider-id=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451775 4430 flags.go:64] FLAG: --qos-reserved=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451780 4430 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451784 4430 flags.go:64] FLAG: --register-node="true"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451789 4430 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451793 4430 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451801 4430 flags.go:64] FLAG: --registry-burst="10"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451806 4430 flags.go:64] FLAG: --registry-qps="5"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451819 4430 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451823 4430 flags.go:64] FLAG: --reserved-memory=""
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451829 4430 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451834 4430 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451839 4430 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451844 4430 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451848 4430 flags.go:64] FLAG: --runonce="false"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451852 4430 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451856 4430 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451861 4430 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 14:08:19.455514 master-0 kubenswrapper[4430]: I1203 14:08:19.451865 4430 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451869 4430 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451873 4430 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451877 4430 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451881 4430 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451885 4430 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451889 4430 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451893 4430 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451897 4430 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451902 4430 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451906 4430 flags.go:64] FLAG: --system-cgroups=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451910 4430 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451917 4430 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451921 4430 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451925 4430 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451937 4430 flags.go:64] FLAG: --tls-min-version=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451942 4430 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451946 4430 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451951 4430 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451956 4430 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451961 4430 flags.go:64] FLAG: --v="2"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451967 4430 flags.go:64] FLAG: --version="false"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451976 4430 flags.go:64] FLAG: --vmodule=""
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451983 4430 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: I1203 14:08:19.451988 4430 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 14:08:19.456131 master-0 kubenswrapper[4430]: W1203 14:08:19.452121 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452128 4430 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452133 4430 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452137 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452142 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452145 4430 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452149 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452153 4430 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452157 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452160 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452164 4430 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452168 4430 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452175 4430 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452179 4430 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452183 4430 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452187 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452191 4430 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452194 4430 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452198 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452203 4430 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:08:19.456766 master-0 kubenswrapper[4430]: W1203 14:08:19.452207 4430 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452211 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452216 4430 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452221 4430 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452226 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452230 4430 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452233 4430 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452237 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452241 4430 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452245 4430 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452249 4430 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452252 4430 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452257 4430 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452260 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452265 4430 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452269 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452273 4430 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452278 4430 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452282 4430 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:08:19.457243 master-0 kubenswrapper[4430]: W1203 14:08:19.452285 4430 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452289 4430 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452292 4430 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452297 4430 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452301 4430 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452306 4430 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452310 4430 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452314 4430 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452318 4430 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452322 4430 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452327 4430 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452330 4430 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452340 4430 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452344 4430 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452348 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452352 4430 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452358 4430 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452363 4430 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452368 4430 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452373 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:08:19.457724 master-0 kubenswrapper[4430]: W1203 14:08:19.452378 4430 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452383 4430 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452387 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452392 4430 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452396 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452402 4430 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452406 4430 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452410 4430 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452428 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452434 4430 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452438 4430 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452443 4430 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: W1203 14:08:19.452447 4430 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:08:19.458268 master-0 kubenswrapper[4430]: I1203 14:08:19.452456 4430 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:08:19.460972 master-0 kubenswrapper[4430]: I1203 14:08:19.460552 4430 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 14:08:19.460972 master-0 kubenswrapper[4430]: I1203 14:08:19.460966 4430 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 14:08:19.461082 master-0 kubenswrapper[4430]: W1203 14:08:19.461058 4430 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:08:19.461082 master-0 kubenswrapper[4430]: W1203 14:08:19.461068 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:08:19.461082 master-0 kubenswrapper[4430]: W1203 14:08:19.461074 4430 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:08:19.461082 master-0 kubenswrapper[4430]: W1203 14:08:19.461078 4430 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:08:19.461082 master-0 kubenswrapper[4430]: W1203 14:08:19.461084 4430 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461090 4430 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461096 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461103 4430 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461111 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461116 4430 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461121 4430 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461127 4430 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461133 4430 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461138 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461143 4430 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461147 4430 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461152 4430 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461156 4430 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461162 4430 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461168 4430 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461173 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461177 4430 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:08:19.461219 master-0 kubenswrapper[4430]: W1203 14:08:19.461182 4430 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461186 4430 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461191 4430 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461195 4430 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461199 4430 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461205 4430 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461210 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461214 4430 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461218 4430 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461223 4430 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461228 4430 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 
14:08:19.461233 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461238 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461243 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461248 4430 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461252 4430 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461257 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461262 4430 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461267 4430 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461271 4430 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:08:19.461762 master-0 kubenswrapper[4430]: W1203 14:08:19.461276 4430 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461280 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461284 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461289 4430 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461293 4430 
feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461298 4430 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461302 4430 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461307 4430 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461311 4430 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461316 4430 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461320 4430 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461324 4430 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461330 4430 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461335 4430 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461343 4430 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461348 4430 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461353 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461357 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461362 4430 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461366 4430 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:08:19.462297 master-0 kubenswrapper[4430]: W1203 14:08:19.461370 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461375 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461380 4430 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461384 4430 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461388 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461392 4430 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:08:19.462952 master-0 
kubenswrapper[4430]: W1203 14:08:19.461396 4430 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461400 4430 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461404 4430 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461408 4430 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: I1203 14:08:19.461434 4430 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461832 4430 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461850 4430 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461855 4430 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461860 4430 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:08:19.462952 master-0 kubenswrapper[4430]: W1203 14:08:19.461864 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 
14:08:19.461877 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461882 4430 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461886 4430 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461891 4430 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461896 4430 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461900 4430 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461905 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461909 4430 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461914 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461918 4430 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461924 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461929 4430 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461935 4430 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461944 4430 feature_gate.go:330] unrecognized feature gate: 
BareMetalLoadBalancer Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461949 4430 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461954 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461959 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461965 4430 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461970 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:08:19.463324 master-0 kubenswrapper[4430]: W1203 14:08:19.461974 4430 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.461978 4430 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.461986 4430 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.461992 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.461996 4430 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462001 4430 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462011 4430 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462016 4430 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462020 4430 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462025 4430 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462029 4430 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462034 4430 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462038 4430 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462043 4430 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462048 4430 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462053 4430 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:08:19.463859 master-0 
kubenswrapper[4430]: W1203 14:08:19.462057 4430 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462061 4430 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462066 4430 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462076 4430 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:08:19.463859 master-0 kubenswrapper[4430]: W1203 14:08:19.462080 4430 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462085 4430 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462090 4430 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462096 4430 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462101 4430 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462106 4430 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462110 4430 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462115 4430 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462119 4430 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462125 4430 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462129 4430 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462139 4430 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462146 4430 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462151 4430 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462156 4430 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462162 4430 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462168 4430 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462173 4430 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462181 4430 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:08:19.464328 master-0 kubenswrapper[4430]: W1203 14:08:19.462186 4430 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462191 4430 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462196 4430 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462200 4430 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462209 4430 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462214 4430 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462220 4430 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462432 4430 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: W1203 14:08:19.462439 4430 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: I1203 14:08:19.462449 4430 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:08:19.464834 master-0 kubenswrapper[4430]: I1203 14:08:19.462809 4430 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 14:08:19.467145 master-0 kubenswrapper[4430]: I1203 14:08:19.467081 4430 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 14:08:19.467338 master-0 kubenswrapper[4430]: I1203 14:08:19.467305 4430 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 03 14:08:19.467986 master-0 kubenswrapper[4430]: I1203 14:08:19.467957 4430 server.go:997] "Starting client certificate rotation" Dec 03 14:08:19.467986 master-0 kubenswrapper[4430]: I1203 14:08:19.467979 4430 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 14:08:19.468475 master-0 kubenswrapper[4430]: I1203 14:08:19.468386 4430 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 08:45:31.938657266 +0000 UTC Dec 03 14:08:19.468523 master-0 kubenswrapper[4430]: I1203 14:08:19.468473 4430 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h37m12.47018765s for next certificate rotation Dec 03 14:08:19.468758 master-0 kubenswrapper[4430]: I1203 14:08:19.468725 4430 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:08:19.470337 master-0 kubenswrapper[4430]: I1203 14:08:19.470300 4430 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:08:19.474866 master-0 kubenswrapper[4430]: I1203 14:08:19.474814 4430 log.go:25] "Validated CRI v1 runtime API" Dec 03 14:08:19.481332 master-0 kubenswrapper[4430]: I1203 14:08:19.481256 4430 log.go:25] "Validated CRI v1 image API" Dec 03 14:08:19.483507 master-0 kubenswrapper[4430]: I1203 14:08:19.483138 4430 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 14:08:19.489270 master-0 kubenswrapper[4430]: I1203 14:08:19.489183 4430 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3] Dec 03 14:08:19.489738 master-0 kubenswrapper[4430]: I1203 14:08:19.489251 4430 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm major:0 minor:353 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm major:0 minor:217 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm major:0 minor:221 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm major:0 minor:326 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm major:0 
minor:322 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm major:0 minor:362 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm major:0 minor:287 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm major:0 minor:387 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm major:0 minor:329 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm major:0 minor:68 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm major:0 minor:291 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm major:0 minor:62 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:313 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9:{mountpoint:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 major:0 minor:328 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp major:0 minor:198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf:{mountpoint:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf major:0 minor:315 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl:{mountpoint:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl major:0 minor:366 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j major:0 minor:314 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth major:0 minor:201 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv:{mountpoint:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv major:0 minor:311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:278 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2:{mountpoint:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 major:0 minor:359 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:351 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4:{mountpoint:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 major:0 minor:319 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg:{mountpoint:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg major:0 minor:352 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q:{mountpoint:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q major:0 minor:371 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls major:0 minor:350 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn:{mountpoint:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn major:0 minor:361 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:348 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:207 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:195 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj:{mountpoint:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj major:0 minor:382 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:271 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:243 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:210 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:360 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx major:0 minor:327 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs major:0 minor:196 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:211 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7:{mountpoint:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7 major:0 minor:349 fsType:tmpfs blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/06efd20e5e2a9b7e172efeca39d2dea7ac968e5a88d0e1d95e55c8fcbbcf94c1/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-113:{mountpoint:/var/lib/containers/storage/overlay/be05bff67b44502e94e065df53bd887d4cc895b98b360cf29ea7ea7ed16edef5/merged major:0 minor:113 fsType:overlay blockSize:0} overlay_0-131:{mountpoint:/var/lib/containers/storage/overlay/15a92ce52afbb72f98fb4ef195bb8a01eec6b31443e3e1eccd5d4ac77becdea5/merged major:0 minor:131 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/7cc39c503d868e009fa7329d4314b52878f872c4c7aee87a28c4b95d14783061/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/f0879cef70295ecbf90aa73204b488d707aa0573b4ee954fc0ec3bf17cecacfd/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/a8551e2fe6db269b264ddbce879fb852cbd5221efe431f55f4a51d2123efc516/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-151:{mountpoint:/var/lib/containers/storage/overlay/ccf7b687ce6e8526d5911a4c547202d3b89bfd09b77166ab6b13962f29bb9277/merged major:0 minor:151 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/ad21089139acfefc2fcc6a255897d76cf8c9d5058fe6c8bd7c324af3345035bd/merged major:0 minor:165 fsType:overlay blockSize:0} 
overlay_0-173:{mountpoint:/var/lib/containers/storage/overlay/1d65e45dfee7595fca28003f98074e1c33592333486177ac0e9b2a2cc05668bd/merged major:0 minor:173 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/740a5f33e0f61edcf7477c3d649971201e6c3fc576f2bcfb75c28dec71aebe5d/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/6ec9fadcc3a1823811a457869024ee9fe0a4984856fd0b4c1314dd76a130a230/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-188:{mountpoint:/var/lib/containers/storage/overlay/fd77124c21370beae2d47d95c33eac47d97989c304dee1888d11427bf6724f54/merged major:0 minor:188 fsType:overlay blockSize:0} overlay_0-219:{mountpoint:/var/lib/containers/storage/overlay/0c059b548a463eb40e763dc48c0a1047e4b388a44bd010000f048af9ab01b274/merged major:0 minor:219 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/76ace6528e458acc462ca65ee0fe34dd4a39399ed2bc28f9b8b7357f0c51387f/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/5c5cf5cc825321aabf201851d55753b80df430d9b554c3ec653d1bc7a8156156/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-227:{mountpoint:/var/lib/containers/storage/overlay/7ebf0858301167784d1d76837204ad39b2db08115ed23c6ed8cfd6caf15a2539/merged major:0 minor:227 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/000434fa0cb022fb5d67c5fa2d239af1b38fe05e76b055dc7ec1735a20cd6f2a/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-246:{mountpoint:/var/lib/containers/storage/overlay/a94f83ec3fc50b340fa40529f458619100f1246242c3b997a4ec5c2132865e02/merged major:0 minor:246 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/2279cf1b7dedadc7114b3a176874e7383386817b2805e38b348dde15ff437f85/merged major:0 minor:248 fsType:overlay blockSize:0} 
overlay_0-251:{mountpoint:/var/lib/containers/storage/overlay/5efeb81c967d3474710392c84ac6402afd067890a5ccc87798916c004fa7b1f9/merged major:0 minor:251 fsType:overlay blockSize:0} overlay_0-260:{mountpoint:/var/lib/containers/storage/overlay/4e9472a86a27e2a2a5a974daf57a7a8bcef40e721b8928bd37100dec2b512427/merged major:0 minor:260 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/17a37bd7888ba6f733e11b5d9ae431d57f2c69459605fbcc224e39d78978bfd7/merged major:0 minor:263 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/284b152d29c59be2a7d0b7c61d3406a9f4eac49e523afa553463ba28c4649d9f/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/09b3e1efe17842a851cd7555e10091f1e10f324e90514c7fd6349215180ec387/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/09162a6380376b19b0484b2c175550af0fb87c036d30e6869638abf76be7a3a2/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/728d2e01ae30b6867e9388ad30d23d3c84d5a634043016b71729fad3798ff376/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/6f61805b9ed1a7dc0269bc2716080128318b569348ce23121141779c0118afff/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/79e9d7e32b79d48434a84f9add8cdb04c4c2a41dd07686ee1d67f55251e6d199/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/a9455d6d62fffaa66e3f04e4f8ec97bac2968e73fe0a612f9a45a02094d6576c/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/6866284c970c12403790dd0db140caff0a792ae52caa096cb08dab660173ed0d/merged major:0 minor:316 fsType:overlay blockSize:0} 
overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/64a06ae73807ff5437fabf35ee00f8cc6e89dc609695eead306f53d317127efa/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/0ee0747f636e0287ec2be6fe333c77d7392ac34c132fbc64ff3f1c1d022c8573/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/a8808f61351cd2f1591ec3c2e25da62c9de1dc8aca0b5a9f6f5ffae47f9c1b5d/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/ba974d9ad20228bbfed8db9b746be2a67b4cbabba3854de22a9ff06e76595afb/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/e3e5b1174f7f5f5b35473b6e0efe88220eb3f56c2cabd12cc34d731a01dc2d6b/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-338:{mountpoint:/var/lib/containers/storage/overlay/4b9dc1a3ad418c15def0bc0ee9fc7599461d191ae9873a5c8867a25274b1e43d/merged major:0 minor:338 fsType:overlay blockSize:0} overlay_0-340:{mountpoint:/var/lib/containers/storage/overlay/126bee45cd5c5a0c59c5c3f683cac3892b047167cf5671b79253c2ab27c16c08/merged major:0 minor:340 fsType:overlay blockSize:0} overlay_0-344:{mountpoint:/var/lib/containers/storage/overlay/9af82de5e3e8292e9f259a486fc5773d4214b3c0cc8874486ff63bc219082cff/merged major:0 minor:344 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/ebabb805e1a0b3ccf21db0c8f1db330cf653ea3852c3f0ef8820f17a7f9dc048/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/e1e3f75a638ac905b73fd00c46d1ffa3218b2926b51f6870e3777c15ca374de6/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/53d23c088e79a218658d9a3cb468c4d23cffa491aca399c164bc18683b2443e7/merged major:0 minor:357 fsType:overlay blockSize:0} 
overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/157c860d309cb38a3266152f2cec6eeb9eb728a20149ea8464570b93c42c7af0/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/f9e4f89f9c61de14e8b3e8884d875235699701a7a8301bb37362c1996d8fe46a/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/b19e8d63e0319893a4fa4dfef4f1e7f6d394abaa7df7c9f2d92c8515a67e8f7e/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/f7a50d8d08b827fb76d9c00d21a0072fbe69f41d273577fb47b6ba7775578876/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/0426fe41eadea03021ed362ea5d44c61a9753ab06aab1b7491f5faad8a283052/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/fa30a35224ebf04b3f5b74f937b6fdd31bd316019307bba2fc1ac9d020bb4c76/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/b118398522c860fbd26498ae28bf992f64c34f9320492edd0b0c717b44152bbc/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/09c5d21c4e204688020704d569d44765f395dfc9e127955f55b27cdfd676e4e5/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/52a31185f67e410d64e54164e9ecf8863d891c184ee35590cf7fe8c3e451435c/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/62cc2aec484dc042f9f0de358ef8b1ac71bb6603426588f4aaf8bbe2864f001e/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/e229a06df9d106704023d1557f6d44bd12b817e0ee45a52aa493dc69108f70f5/merged major:0 minor:45 fsType:overlay blockSize:0} 
overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/4a8b9b964ae0b106bcaf4e89eacb6d2f49abcb4fbc5e183abfa3e19bf67e7d8c/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/e3c42b0306142b7fd125109a14d434f29cc4336dc10da90e4086f0364ea4c2d0/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/517314f3020ee5b846f722aded79a2b8a0ed7cf6e1aacd19356865eb1a60f40c/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/9d24b4239a0cf7fe1b20479fe2c006eb64ef8027b09e1fd3ff97e218b948abaa/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/b583cfa94b91996075195eb2b790d5c381a728e1985ad741f431b3f048ef35f7/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/f3d66257f207ef34673d82e60fc31daf4f79badf034d93758682b1a1e2a1570a/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/e871924d2cda38656ab6d6daef5a8983596945f0effbbe865e57d44206a87dbe/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/804317c1e34d157305a10f55f38e481655f4fe20cacb2ee14de88d1022b9fc01/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/57e3fba788e0f10e444395a9584f140adffdb7ed744de318324b5f3994562321/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/791ea21ba7726cd6548f2aff335d46079cb3f8273ba24b6eeb64af2eb9f4d3a5/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/6dd87a98f0c0350d7845a648c0a7446eb97631a09a0b0f5c82d22568f2ee5a9e/merged major:0 minor:76 fsType:overlay blockSize:0} 
overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/e80542971c4d2518d3d01ef14521f830b127ac4b189e9d38731933c61e100ba1/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/46f9e5a5f5fce397eb8dd2f3613c3efe4341530f9e64cdfd325cde4b308ea261/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/3bbac828a6b0b627c1168bfc0e9c6d4f5cd74b74711e67632b60116b0ee8e4da/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/5fee70083481fac133262f8bbfc00a7346b9e4f7ea77999d78db1e8245cb3dad/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/5a016faa71ace2a6f4b487db8814fed0f4afa5a744c8fc31809847d5e3f129bb/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/d5810a37b17cf2bfc90dbeb6f989a9fdb3d3e139f92695a09151dd19feca898e/merged major:0 minor:89 fsType:overlay blockSize:0}] Dec 03 14:08:19.523052 master-0 kubenswrapper[4430]: I1203 14:08:19.522315 4430 manager.go:217] Machine: {Timestamp:2025-12-03 14:08:19.521183767 +0000 UTC m=+0.144097863 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:764a923e-eafb-47f4-8635-9cb972b9b445 Filesystems:[{Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:199 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:202 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99827d7a12ac94cbfa3b92081e32a1ff678ea0543112f35162b725da60d7e266/userdata/shm DeviceMajor:0 DeviceMinor:329 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:348 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:203 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx DeviceMajor:0 DeviceMinor:327 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-340 DeviceMajor:0 DeviceMinor:340 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:196 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes/kubernetes.io~projected/kube-api-access-9rtlf DeviceMajor:0 DeviceMinor:315 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-173 DeviceMajor:0 DeviceMinor:173 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d881a7c0337c15dff2ae9ce084cd637f4944097da9ea45c54c8c6072f6028875/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg DeviceMajor:0 DeviceMinor:352 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e57121e16a1c4ce0b56e17ca5c970c909463062c282a02653437f48fca502467/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-188 DeviceMajor:0 DeviceMinor:188 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-219 DeviceMajor:0 DeviceMinor:219 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4f777b4a9f01e279eb75a9721c018e9ede56a033088181293b6027252f128e8/userdata/shm DeviceMajor:0 DeviceMinor:291 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 DeviceMajor:0 DeviceMinor:359 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-113 DeviceMajor:0 DeviceMinor:113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j DeviceMajor:0 DeviceMinor:314 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8988aae215062a9abc9a07405e3b79f4556db862ba019c8b074285ccd1d4ac90/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:204 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6ec91452865f313b8e8da79ca1cf4150dda15d26f7b9df21f8a71b4378e1baa5/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:205 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:208 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0d6c98597a39324de4c2581e2f27a2a59c93e5feb59031085d5c0459aa6b6083/userdata/shm DeviceMajor:0 DeviceMinor:221 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-246 DeviceMajor:0 DeviceMinor:246 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-344 DeviceMajor:0 DeviceMinor:344 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj DeviceMajor:0 DeviceMinor:382 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b94e7894211e643b204498a1625e46ba0e6ebd8376c4dd9b27bf26fd06fac2d/userdata/shm DeviceMajor:0 DeviceMinor:287 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 DeviceMajor:0 DeviceMinor:319 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl DeviceMajor:0 DeviceMinor:366 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 
DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:271 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-131 DeviceMajor:0 DeviceMinor:131 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21cc16ab3a9bcd842f6740b10d1a3f4ee512c1baef6a6489ec605658e0c61bb3/userdata/shm DeviceMajor:0 DeviceMinor:322 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q DeviceMajor:0 DeviceMinor:371 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:312 Capacity:49335554048 
Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0b49c80be4781670f484c491103443927ec9a517060ddbe5f0e3c3e59abc9dc9/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~projected/kube-api-access-52zj7 DeviceMajor:0 DeviceMinor:349 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:313 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh 
DeviceMajor:0 DeviceMinor:215 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-338 DeviceMajor:0 DeviceMinor:338 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251/userdata/shm DeviceMajor:0 DeviceMinor:62 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:198 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-251 DeviceMajor:0 DeviceMinor:251 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:318 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-227 DeviceMajor:0 DeviceMinor:227 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84a5ea292fbaff5e94b105a789e091a4de4e0e578e7ee5769493be1f6ff174e5/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:200 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:360 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes/kubernetes.io~projected/kube-api-access-hxscv DeviceMajor:0 DeviceMinor:311 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-151 DeviceMajor:0 DeviceMinor:151 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:209 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:213 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8/userdata/shm DeviceMajor:0 
DeviceMinor:68 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:212 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8668774aa42365d25bd0a01cd8e99561fe2c61999e02fbdeeb73544ee3756139/userdata/shm DeviceMajor:0 DeviceMinor:387 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls DeviceMajor:0 DeviceMinor:350 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06606de575857052f928ef38aff7fe99c9965f313339d7e732c1e7df3e65abe8/userdata/shm DeviceMajor:0 DeviceMinor:353 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:197 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-260 DeviceMajor:0 DeviceMinor:260 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ad1d12d9ce577d5aaceb2960067a81fa5876f9b13140850d4e641b82be39fd8/userdata/shm DeviceMajor:0 DeviceMinor:362 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 DeviceMajor:0 DeviceMinor:216 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn DeviceMajor:0 DeviceMinor:361 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp DeviceMajor:0 DeviceMinor:214 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0bb625d932bcee6989b21302ccee75626889a241a81c158d0837df4e026d7718/userdata/shm DeviceMajor:0 DeviceMinor:217 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 
DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:195 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:242 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1efd357cde0fb1b3a9959ff2678df4fdf6f7f40371d3dc1cd5538c9627455c00/userdata/shm DeviceMajor:0 DeviceMinor:326 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:201 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:210 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:211 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd 
DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:351 Capacity:49335554048 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:36:91:5c:9c:b9:c3 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 
Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 
Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:08:19.523052 master-0 kubenswrapper[4430]: I1203 14:08:19.522945 4430 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 14:08:19.523509 master-0 kubenswrapper[4430]: I1203 14:08:19.523098 4430 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:08:19.523509 master-0 kubenswrapper[4430]: I1203 14:08:19.523345 4430 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:08:19.523579 master-0 kubenswrapper[4430]: I1203 14:08:19.523549 4430 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:08:19.523802 master-0 kubenswrapper[4430]: I1203 14:08:19.523579 4430 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage
":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:08:19.523870 master-0 kubenswrapper[4430]: I1203 14:08:19.523818 4430 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:08:19.523870 master-0 kubenswrapper[4430]: I1203 14:08:19.523830 4430 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:08:19.523870 master-0 kubenswrapper[4430]: I1203 14:08:19.523840 4430 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:08:19.523870 master-0 kubenswrapper[4430]: I1203 14:08:19.523867 4430 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:08:19.524106 master-0 kubenswrapper[4430]: I1203 14:08:19.524083 4430 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:08:19.524225 master-0 kubenswrapper[4430]: I1203 14:08:19.524194 4430 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:08:19.524285 master-0 kubenswrapper[4430]: I1203 14:08:19.524275 4430 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:08:19.524315 master-0 kubenswrapper[4430]: I1203 14:08:19.524291 4430 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:08:19.524315 master-0 kubenswrapper[4430]: I1203 14:08:19.524307 4430 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:08:19.524365 master-0 kubenswrapper[4430]: I1203 14:08:19.524319 4430 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:08:19.524365 master-0 
kubenswrapper[4430]: I1203 14:08:19.524340 4430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:08:19.525984 master-0 kubenswrapper[4430]: I1203 14:08:19.525917 4430 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:08:19.526435 master-0 kubenswrapper[4430]: I1203 14:08:19.526377 4430 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 03 14:08:19.527075 master-0 kubenswrapper[4430]: I1203 14:08:19.526815 4430 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:08:19.528045 master-0 kubenswrapper[4430]: I1203 14:08:19.528004 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:08:19.528045 master-0 kubenswrapper[4430]: I1203 14:08:19.528043 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528053 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528063 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528072 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528081 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528090 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528112 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:08:19.528133 master-0 
kubenswrapper[4430]: I1203 14:08:19.528123 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:08:19.528133 master-0 kubenswrapper[4430]: I1203 14:08:19.528133 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:08:19.528365 master-0 kubenswrapper[4430]: I1203 14:08:19.528153 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:08:19.528365 master-0 kubenswrapper[4430]: I1203 14:08:19.528168 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 14:08:19.528365 master-0 kubenswrapper[4430]: I1203 14:08:19.528202 4430 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:08:19.528741 master-0 kubenswrapper[4430]: I1203 14:08:19.528694 4430 server.go:1280] "Started kubelet" Dec 03 14:08:19.528885 master-0 kubenswrapper[4430]: I1203 14:08:19.528814 4430 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 14:08:19.529010 master-0 kubenswrapper[4430]: I1203 14:08:19.528872 4430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:08:19.529076 master-0 kubenswrapper[4430]: I1203 14:08:19.529051 4430 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:08:19.529664 master-0 kubenswrapper[4430]: I1203 14:08:19.529623 4430 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:08:19.530751 master-0 systemd[1]: Started Kubernetes Kubelet. 
Dec 03 14:08:19.539872 master-0 kubenswrapper[4430]: I1203 14:08:19.539814 4430 server.go:449] "Adding debug handlers to kubelet server"
Dec 03 14:08:19.540496 master-0 kubenswrapper[4430]: I1203 14:08:19.540455 4430 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 14:08:19.541388 master-0 kubenswrapper[4430]: I1203 14:08:19.541362 4430 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 14:08:19.542900 master-0 kubenswrapper[4430]: I1203 14:08:19.542848 4430 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 03 14:08:19.543001 master-0 kubenswrapper[4430]: I1203 14:08:19.542909 4430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 03 14:08:19.543054 master-0 kubenswrapper[4430]: I1203 14:08:19.543011 4430 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 09:58:13.014253303 +0000 UTC
Dec 03 14:08:19.543054 master-0 kubenswrapper[4430]: I1203 14:08:19.543052 4430 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h49m53.471203474s for next certificate rotation
Dec 03 14:08:19.543462 master-0 kubenswrapper[4430]: E1203 14:08:19.543408 4430 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Dec 03 14:08:19.543730 master-0 kubenswrapper[4430]: I1203 14:08:19.543684 4430 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 03 14:08:19.543730 master-0 kubenswrapper[4430]: I1203 14:08:19.543726 4430 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 03 14:08:19.544029 master-0 kubenswrapper[4430]: I1203 14:08:19.543748 4430 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Dec 03 14:08:19.544414 master-0 kubenswrapper[4430]: I1203 14:08:19.544391 4430 factory.go:153] Registering CRI-O factory
Dec 03 14:08:19.544554 master-0 kubenswrapper[4430]: I1203 14:08:19.544537 4430 factory.go:221] Registration of the crio container factory successfully
Dec 03 14:08:19.544801 master-0 kubenswrapper[4430]: I1203 14:08:19.544783 4430 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 03 14:08:19.544916 master-0 kubenswrapper[4430]: I1203 14:08:19.544902 4430 factory.go:55] Registering systemd factory
Dec 03 14:08:19.544990 master-0 kubenswrapper[4430]: I1203 14:08:19.544979 4430 factory.go:221] Registration of the systemd container factory successfully
Dec 03 14:08:19.545087 master-0 kubenswrapper[4430]: I1203 14:08:19.545077 4430 factory.go:103] Registering Raw factory
Dec 03 14:08:19.545546 master-0 kubenswrapper[4430]: I1203 14:08:19.545535 4430 manager.go:1196] Started watching for new ooms in manager
Dec 03 14:08:19.545963 master-0 kubenswrapper[4430]: I1203 14:08:19.545934 4430 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 03 14:08:19.546316 master-0 kubenswrapper[4430]: I1203 14:08:19.546303 4430 manager.go:319] Starting recovery of all containers
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563560 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume" seLinuxMountContext=""
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563626 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs" seLinuxMountContext=""
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563642 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext=""
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563658 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext=""
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563672 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm" seLinuxMountContext=""
Dec 03 14:08:19.563650 master-0 kubenswrapper[4430]: I1203 14:08:19.563685 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle"
seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563700 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563713 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563728 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563742 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563757 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563770 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563782 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563799 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563813 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563836 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563853 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563866 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563879 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563891 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563904 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563915 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563929 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563941 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563952 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563965 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563982 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.563996 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.564008 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.564020 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.564031 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.564046 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca" seLinuxMountContext=""
Dec 03 14:08:19.564036 master-0 kubenswrapper[4430]: I1203 14:08:19.564058 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564086 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564097 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564108 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564122 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564136 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564147 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564158 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564170 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564182 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564194 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564208 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42c95e54-b4ba-4b19-a97c-abcec840ac5d" volumeName="kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564220 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564231 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564244 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564257 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564269 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0a2889-39a5-471e-bd46-958e2f8eacaa" volumeName="kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564281 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564291 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564305 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564323 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564336 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564349 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564359 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564374 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564388 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564402 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564447 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564463 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564476 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564489 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564501 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" volumeName="kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564513 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564526 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564539 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564555 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564567 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564579 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564592 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564604 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564618 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564629 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564640 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564651 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564662 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564675 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564687 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564701 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564711 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564722 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564734 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564746 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564758 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564768 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564779 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564789 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564801 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564811 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564823 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564835 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564846 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564858 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564871 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564883 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564893 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564904 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" volumeName="kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564916 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564927 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564942 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564953 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564966 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564978 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.564998 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565011 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565023 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities" seLinuxMountContext=""
Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203
14:08:19.565037 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565052 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565064 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565076 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565089 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565102 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565115 
4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565126 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565137 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565151 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565163 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" volumeName="kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565175 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565188 4430 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b02244d0-f4ef-4702-950d-9e3fb5ced128" volumeName="kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565202 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565215 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565227 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565240 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565252 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565264 4430 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565278 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565290 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 14:08:19.565167 master-0 kubenswrapper[4430]: I1203 14:08:19.565302 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565314 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565326 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565338 4430 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565350 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565363 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565372 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565384 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565399 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565412 4430 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565440 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565457 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565469 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565481 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565491 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: 
I1203 14:08:19.565502 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565515 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565526 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565538 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565549 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565561 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 
14:08:19.565573 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565584 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" volumeName="kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565596 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565609 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565619 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565632 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 
14:08:19.565645 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565658 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565669 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565681 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565693 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565704 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 
14:08:19.565716 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565745 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565756 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565766 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565777 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565789 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: 
I1203 14:08:19.565800 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565814 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565826 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565837 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565850 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565861 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle" seLinuxMountContext="" Dec 03 
14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565872 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565884 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565896 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565907 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565918 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565929 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5" seLinuxMountContext="" 
Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565941 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565952 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565963 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565975 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565986 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.565997 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web" 
seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566010 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566020 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566029 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566040 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566053 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566064 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" 
volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566075 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566087 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566098 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566109 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566120 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566132 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566142 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566157 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566167 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566279 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566294 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566306 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566317 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566329 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566342 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566354 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566365 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566383 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566396 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566410 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566437 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566454 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566469 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566486 4430 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566499 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566512 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566526 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566542 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566562 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566572 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566584 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566598 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566612 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566625 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566660 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566675 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566689 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566699 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566712 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566724 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566735 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566749 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566761 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566790 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566805 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566818 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566830 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566843 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566858 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566870 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566882 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566894 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566910 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566933 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566947 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566963 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566977 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.566992 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567007 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567023 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567043 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567058 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567071 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567082 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567093 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567104 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567119 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567136 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567149 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567163 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567179 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567196 4430 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="22673f47-9484-4eed-bbce-888588c754ed" volumeName="kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567209 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567225 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567237 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4dd1d142-6569-438d-b0c2-582aed44812d" volumeName="kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567249 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567262 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567274 4430 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d3200abb-a440-44db-8897-79c809c1d838" volumeName="kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567289 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f723d97-5c65-4ae7-9085-26db8b4f2f52" volumeName="kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567337 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567350 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567386 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567399 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567411 4430 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567448 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567471 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567507 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567522 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567534 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567547 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567559 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567570 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567585 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567596 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567614 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567625 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567638 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567649 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567660 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567672 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567687 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567697 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567710 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567721 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567737 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567750 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567760 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567771 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567785 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567796 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs" seLinuxMountContext="" Dec 03 14:08:19.568963 master-0 kubenswrapper[4430]: I1203 14:08:19.567807 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6cfc08c2-f287-40b8-bf28-4f884595e93c" volumeName="kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567816 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567828 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567838 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567856 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567872 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ba502ba-1179-478e-b4b9-f3409320b0ad" volumeName="kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567887 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567901 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62f94ae7-6043-4761-a16b-e0f072b1364b" volumeName="kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567915 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567925 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567935 4430 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff21a9a5-706f-4c71-bd0c-5586374f819a" volumeName="kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca" seLinuxMountContext="" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567947 4430 reconstruct.go:97] "Volume reconstruction finished" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.567954 4430 reconciler.go:26] "Reconciler: start to sync state" Dec 03 14:08:19.579728 master-0 kubenswrapper[4430]: I1203 14:08:19.573162 4430 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 14:08:19.580227 master-0 kubenswrapper[4430]: I1203 14:08:19.580065 4430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 03 14:08:19.582918 master-0 kubenswrapper[4430]: I1203 14:08:19.582848 4430 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 03 14:08:19.583016 master-0 kubenswrapper[4430]: I1203 14:08:19.582928 4430 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 14:08:19.583016 master-0 kubenswrapper[4430]: I1203 14:08:19.582958 4430 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 14:08:19.583090 master-0 kubenswrapper[4430]: E1203 14:08:19.583020 4430 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 14:08:19.586524 master-0 kubenswrapper[4430]: I1203 14:08:19.586480 4430 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 14:08:19.589240 master-0 kubenswrapper[4430]: I1203 14:08:19.589182 4430 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="574a1200efdf4a517bb40025bec2aff5c6d2270f8ea9365cef5bff5b426b3524" exitCode=0 Dec 03 14:08:19.592825 master-0 kubenswrapper[4430]: I1203 14:08:19.592769 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="93043e69ade19a76194367d0e479728ca1d60e88105dc3caf6f3be29dbabbc6a" exitCode=0 Dec 03 14:08:19.602622 master-0 kubenswrapper[4430]: I1203 14:08:19.602546 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="cc112e6842d5a1677f57d5cb903a1e5d6f4646550a794d787fb3ec9cc8aeb9a3" exitCode=0 Dec 03 14:08:19.610951 master-0 kubenswrapper[4430]: I1203 14:08:19.610872 4430 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060" exitCode=1 Dec 03 14:08:19.614171 master-0 kubenswrapper[4430]: I1203 14:08:19.614109 4430 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" 
containerID="2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864" exitCode=0 Dec 03 14:08:19.618591 master-0 kubenswrapper[4430]: I1203 14:08:19.618521 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f" exitCode=0 Dec 03 14:08:19.634802 master-0 kubenswrapper[4430]: I1203 14:08:19.634728 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" exitCode=0 Dec 03 14:08:19.634802 master-0 kubenswrapper[4430]: I1203 14:08:19.634772 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" exitCode=0 Dec 03 14:08:19.634802 master-0 kubenswrapper[4430]: I1203 14:08:19.634780 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" exitCode=0 Dec 03 14:08:19.683232 master-0 kubenswrapper[4430]: E1203 14:08:19.683146 4430 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 14:08:19.703984 master-0 kubenswrapper[4430]: I1203 14:08:19.703924 4430 manager.go:324] Recovery completed Dec 03 14:08:19.743983 master-0 kubenswrapper[4430]: I1203 14:08:19.743919 4430 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 14:08:19.743983 master-0 kubenswrapper[4430]: I1203 14:08:19.743949 4430 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 14:08:19.743983 master-0 kubenswrapper[4430]: I1203 14:08:19.743975 4430 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:08:19.744370 master-0 kubenswrapper[4430]: I1203 14:08:19.744182 4430 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 
03 14:08:19.744370 master-0 kubenswrapper[4430]: I1203 14:08:19.744194 4430 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 03 14:08:19.744370 master-0 kubenswrapper[4430]: I1203 14:08:19.744246 4430 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Dec 03 14:08:19.744370 master-0 kubenswrapper[4430]: I1203 14:08:19.744253 4430 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Dec 03 14:08:19.744370 master-0 kubenswrapper[4430]: I1203 14:08:19.744259 4430 policy_none.go:49] "None policy: Start" Dec 03 14:08:19.746371 master-0 kubenswrapper[4430]: I1203 14:08:19.746316 4430 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 14:08:19.746530 master-0 kubenswrapper[4430]: I1203 14:08:19.746394 4430 state_mem.go:35] "Initializing new in-memory state store" Dec 03 14:08:19.746800 master-0 kubenswrapper[4430]: I1203 14:08:19.746768 4430 state_mem.go:75] "Updated machine memory state" Dec 03 14:08:19.746800 master-0 kubenswrapper[4430]: I1203 14:08:19.746794 4430 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Dec 03 14:08:19.757605 master-0 kubenswrapper[4430]: I1203 14:08:19.757543 4430 manager.go:334] "Starting Device Plugin manager" Dec 03 14:08:19.757750 master-0 kubenswrapper[4430]: I1203 14:08:19.757632 4430 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 14:08:19.757750 master-0 kubenswrapper[4430]: I1203 14:08:19.757655 4430 server.go:79] "Starting device plugin registration server" Dec 03 14:08:19.758270 master-0 kubenswrapper[4430]: I1203 14:08:19.758219 4430 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 14:08:19.758270 master-0 kubenswrapper[4430]: I1203 14:08:19.758240 4430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 14:08:19.758562 master-0 kubenswrapper[4430]: I1203 
14:08:19.758468 4430 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 14:08:19.758735 master-0 kubenswrapper[4430]: I1203 14:08:19.758686 4430 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 14:08:19.758735 master-0 kubenswrapper[4430]: I1203 14:08:19.758706 4430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 14:08:19.760064 master-0 kubenswrapper[4430]: E1203 14:08:19.760010 4430 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:08:19.859140 master-0 kubenswrapper[4430]: I1203 14:08:19.858489 4430 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:08:19.860965 master-0 kubenswrapper[4430]: I1203 14:08:19.860915 4430 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:08:19.861083 master-0 kubenswrapper[4430]: I1203 14:08:19.860972 4430 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:08:19.861083 master-0 kubenswrapper[4430]: I1203 14:08:19.860986 4430 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:08:19.861271 master-0 kubenswrapper[4430]: I1203 14:08:19.861242 4430 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:08:19.874250 master-0 kubenswrapper[4430]: I1203 14:08:19.874166 4430 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Dec 03 14:08:19.874581 master-0 kubenswrapper[4430]: I1203 14:08:19.874298 4430 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Dec 03 14:08:19.875739 master-0 kubenswrapper[4430]: 
I1203 14:08:19.875701 4430 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:08:19.875846 master-0 kubenswrapper[4430]: I1203 14:08:19.875742 4430 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:08:19Z","lastTransitionTime":"2025-12-03T14:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 14:08:19.883595 master-0 kubenswrapper[4430]: I1203 14:08:19.883457 4430 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:08:19.885296 master-0 kubenswrapper[4430]: I1203 14:08:19.885187 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"749d4a97321672e94f0f4d6c55d7fa485dfbd3bbe5480f2c579faa82f311605b"} Dec 03 14:08:19.885296 master-0 kubenswrapper[4430]: I1203 14:08:19.885290 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"aa440bd50b25afd3bbdcd911eb6ddd2cb8d5f29270fc9664a389f142c4f8cf24"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885309 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"e4e74143105a836ab029b335e356e20dcf63f1dfd4df0559287d53a803dfe9b1"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885323 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"353ef5bad57ce46db98c0549f921ee8f0ee580567553f3ba9950d113638096f2"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885336 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"1676af95112121a9e343fac781d61b54d4f18bb5d03944dc4409d844ba4c9c5e"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885347 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerDied","Data":"cc112e6842d5a1677f57d5cb903a1e5d6f4646550a794d787fb3ec9cc8aeb9a3"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885359 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"f5aa2d6b41f5e21a89224256dc48af14","Type":"ContainerStarted","Data":"bab8669ea30872069bdc56319ed2c48f42499fe26751ac8d3ca0ede1a5bee36a"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885370 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885383 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885395 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885409 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885446 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885460 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"2bd5deb4c2095551f816b9cd7a952bdeb6888c958c7bf3b53ec320fdd7d14864"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885474 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"fe037448e5feda9fc9bbbf1bbf8674c101cb4b219513e0365a80e995633a17e6"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885489 
4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885506 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885518 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885531 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f"} Dec 03 14:08:19.885511 master-0 kubenswrapper[4430]: I1203 14:08:19.885548 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"80504ebd6ba988440c44eab507403c926594e98beb338ef28166557ac1fc6f8e"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885578 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e"} Dec 03 
14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885593 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"c98a8d85d3901d33f6fe192bdc7172aa","Type":"ContainerStarted","Data":"f92333341094b48a205a6c8a743b8dc6725c6e086df8f391d70bc2def01c4251"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885611 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885626 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885639 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885651 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885663 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885675 4430 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885692 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885705 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerDied","Data":"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7"} Dec 03 14:08:19.887253 master-0 kubenswrapper[4430]: I1203 14:08:19.885719 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"ebf07eb54db570834b7c9a90b6b07403","Type":"ContainerStarted","Data":"ba22674ca1fdb432e95dbedffc0cfc3f159754eb6ccb515813a34a559f18d00e"} Dec 03 14:08:19.901453 master-0 kubenswrapper[4430]: E1203 14:08:19.901322 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.901453 master-0 kubenswrapper[4430]: E1203 14:08:19.901322 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.901453 master-0 kubenswrapper[4430]: E1203 14:08:19.901364 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:19.902098 master-0 kubenswrapper[4430]: E1203 14:08:19.902060 4430 kubelet.go:1929] "Failed 
creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.974851 master-0 kubenswrapper[4430]: I1203 14:08:19.974718 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.974851 master-0 kubenswrapper[4430]: I1203 14:08:19.974797 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.974851 master-0 kubenswrapper[4430]: I1203 14:08:19.974829 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:19.974851 master-0 kubenswrapper[4430]: I1203 14:08:19.974858 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.974851 master-0 kubenswrapper[4430]: I1203 14:08:19.974884 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.974907 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.974932 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.974957 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975123 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:19.975610 
master-0 kubenswrapper[4430]: I1203 14:08:19.975208 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975242 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975306 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975331 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975466 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975606 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.975610 master-0 kubenswrapper[4430]: I1203 14:08:19.975629 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975650 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975670 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975690 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975723 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975750 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975791 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:19.976278 master-0 kubenswrapper[4430]: I1203 14:08:19.975835 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.002737 master-0 kubenswrapper[4430]: E1203 14:08:20.002654 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already 
exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:20.003162 master-0 kubenswrapper[4430]: E1203 14:08:20.002952 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.077068 master-0 kubenswrapper[4430]: I1203 14:08:20.076900 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077101 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077092 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077255 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077338 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077346 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077516 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.077579 master-0 kubenswrapper[4430]: I1203 14:08:20.077538 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077617 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077480 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077622 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077671 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077718 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077767 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.077867 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078016 master-0 kubenswrapper[4430]: I1203 14:08:20.078009 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078041 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078070 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078130 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078162 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"etcd-master-0\" 
(UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078179 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078248 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078289 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078303 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078348 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078356 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078399 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078411 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078469 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078455 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") 
pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:08:20.078508 master-0 kubenswrapper[4430]: I1203 14:08:20.078513 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078567 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078600 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078643 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078677 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod 
\"etcd-master-0\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078698 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078715 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078735 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078750 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078769 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078783 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078805 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078840 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078892 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078899 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.079665 master-0 kubenswrapper[4430]: I1203 14:08:20.078813 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:20.525450 master-0 kubenswrapper[4430]: I1203 14:08:20.525331 4430 apiserver.go:52] "Watching apiserver" Dec 03 14:08:20.546592 master-0 kubenswrapper[4430]: I1203 14:08:20.545979 4430 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 14:08:20.549968 master-0 kubenswrapper[4430]: I1203 14:08:20.549795 4430 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-kube-apiserver/installer-2-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-target-pcchm","openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql","openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr","openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz","openshift-service-ca/service-ca-6b8bb995f7-b68p8","openshift-console/downloads-6f5db8559b-96ljh","openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8","openshift-apiserver/apiserver-6985f84b49-v9vlg","openshift-etcd/installer-1-master-0","openshift-insights/insights-operator-59d99f9b7b-74sss","openshift-multus/multus-kk4tm","openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw","openshift-console/console-c5d7cd7f9-2hp75","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq","openshift-ingress/router-default-54f97f57-rr9px","openshift-kube-scheduler/installer-5-master-0","openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n","openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r","openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4","openshift-authentication/oauth-openshift-747bdb58b5-mn76f","openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j","openshift-multus/multus-admission-controller-5bdcc987c4-x99xc","openshift-network-node-identity/network-node-identity-c8csx","openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg","openshift-cluster-machine-approve
r/machine-approver-cb84b9cdf-qn94w","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/installer-4-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-monitoring/node-exporter-b62gf","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-multus/network-metrics-daemon-ch7xd","openshift-etcd/etcd-master-0","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-marketplace/redhat-operators-6z4sc","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7","openshift-image-registry/node-ca-4p4zh","openshift-kube-scheduler/installer-4-master-0","openshift-marketplace/community-operators-7fwtv","openshift-monitoring/metrics-server-555496955b-vpcbs","openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv","openshift-cluster-node-tuning-operator/tuned-7zkbg","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/prometheus-operator-565bdcb8-477pk","openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-catalogd/catalogd-controller-manager-754cfd84-qf898","openshift-console/console-648d88c756-vswh8","openshift-controller-manager/controller-manager-78d987764b-xcs5w","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","kube-system/bootstrap-kube-controller-manager-master-0","openshift-ingress-operator/ingress-operator-85dbd94574-8jf
p5","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-marketplace/redhat-marketplace-ddwmn","openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k","openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl","openshift-ingress-canary/ingress-canary-vkpv4","openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg","openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-dns/dns-default-5m4f8","openshift-kube-controller-manager/installer-1-master-0","openshift-machine-api/machine-api-operator-7486ff55f-wcnxg","openshift-marketplace/certified-operators-t8rt7","openshift-monitoring/alertmanager-main-0","openshift-multus/multus-additional-cni-plugins-42hmk","openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l","assisted-installer/assisted-installer-controller-stq5g","openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w","openshift-console-operator/console-operator-77df56447c-vsrxx","openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5","openshift-machine-config-operator/machine-config-server-pvrfs","openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml","openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg","openshift-monitoring/prometheus-k8s-0","openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h","openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg","openshift-ovn-kubernetes/ovnkube-node-txl6b","openshift-monitoring/thanos-querier-cc996c4bd-j4hzr","openshift-machine-config-operator
/machine-config-daemon-2ztl9","openshift-network-operator/iptables-alerter-n24qb","openshift-dns/node-resolver-4xlhs"] Dec 03 14:08:20.550504 master-0 kubenswrapper[4430]: I1203 14:08:20.550396 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 14:08:20.550663 master-0 kubenswrapper[4430]: I1203 14:08:20.550604 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:20.550808 master-0 kubenswrapper[4430]: I1203 14:08:20.550756 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:20.550988 master-0 kubenswrapper[4430]: I1203 14:08:20.550958 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.551042 master-0 kubenswrapper[4430]: I1203 14:08:20.551022 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:20.551298 master-0 kubenswrapper[4430]: E1203 14:08:20.551035 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:20.551298 master-0 kubenswrapper[4430]: E1203 14:08:20.551127 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:20.551298 master-0 kubenswrapper[4430]: E1203 14:08:20.551189 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:08:20.551447 master-0 kubenswrapper[4430]: E1203 14:08:20.551252 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:20.552590 master-0 kubenswrapper[4430]: I1203 14:08:20.551381 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:20.552745 master-0 kubenswrapper[4430]: E1203 14:08:20.552685 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:20.552745 master-0 kubenswrapper[4430]: I1203 14:08:20.551633 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:20.553861 master-0 kubenswrapper[4430]: E1203 14:08:20.552784 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:08:20.554331 master-0 kubenswrapper[4430]: I1203 14:08:20.554103 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.554398 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555022 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555179 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555170 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555226 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555289 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555335 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555333 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555361 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555407 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555416 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555494 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555523 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555519 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555539 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: I1203 14:08:20.555544 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555582 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555657 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:20.555657 master-0 kubenswrapper[4430]: E1203 14:08:20.555456 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:20.556644 master-0 kubenswrapper[4430]: I1203 14:08:20.555961 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:20.556644 master-0 kubenswrapper[4430]: I1203 14:08:20.555971 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: E1203 14:08:20.556939 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: E1203 14:08:20.557080 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.557626 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.557691 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: E1203 14:08:20.557807 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.557874 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.558380 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.560763 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: E1203 14:08:20.560958 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.561500 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.561627 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.562298 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: I1203 14:08:20.562371 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 14:08:20.562524 master-0 kubenswrapper[4430]: E1203 14:08:20.562465 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:20.563240 master-0 kubenswrapper[4430]: I1203 14:08:20.563022 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.565510 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.565550 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.565889 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.565902 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566285 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566454 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566672 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566785 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566929 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.566977 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.567024 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: E1203 14:08:20.567078 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.567500 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: E1203 14:08:20.567566 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.567641 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: E1203 14:08:20.567676 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.567780 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.567914 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: I1203 14:08:20.568009 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: E1203 14:08:20.568003 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:20.568086 master-0 kubenswrapper[4430]: E1203 14:08:20.568048 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:20.569029 master-0 kubenswrapper[4430]: I1203 14:08:20.568650 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:20.569029 master-0 kubenswrapper[4430]: E1203 14:08:20.568714 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:20.569029 master-0 kubenswrapper[4430]: I1203 14:08:20.568967 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 14:08:20.569029 master-0 kubenswrapper[4430]: I1203 14:08:20.568995 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: I1203 14:08:20.569182 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: I1203 14:08:20.569287 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: E1203 14:08:20.569435 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: I1203 14:08:20.569455 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: I1203 14:08:20.569527 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 14:08:20.569998 master-0 kubenswrapper[4430]: I1203 14:08:20.569817 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 14:08:20.570900 master-0 kubenswrapper[4430]: I1203 14:08:20.570709 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Dec 03 14:08:20.570900 master-0 kubenswrapper[4430]: I1203 14:08:20.570775 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:20.570900 master-0 kubenswrapper[4430]: E1203 14:08:20.570875 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:20.571372 master-0 kubenswrapper[4430]: I1203 14:08:20.570988 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Dec 03 14:08:20.571372 master-0 kubenswrapper[4430]: I1203 14:08:20.571018 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:20.571372 master-0 kubenswrapper[4430]: E1203 14:08:20.571143 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:20.571650 master-0 kubenswrapper[4430]: I1203 14:08:20.571592 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:08:20.571712 master-0 kubenswrapper[4430]: I1203 14:08:20.571640 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:20.571781 master-0 kubenswrapper[4430]: E1203 14:08:20.571751 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:20.571827 master-0 kubenswrapper[4430]: I1203 14:08:20.571797 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 14:08:20.572313 master-0 kubenswrapper[4430]: I1203 14:08:20.572278 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 14:08:20.572592 master-0 kubenswrapper[4430]: I1203 14:08:20.572402 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:20.572666 master-0 kubenswrapper[4430]: E1203 14:08:20.572626 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:20.572818 master-0 kubenswrapper[4430]: I1203 14:08:20.572783 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:20.572873 master-0 kubenswrapper[4430]: E1203 14:08:20.572851 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:20.572922 master-0 kubenswrapper[4430]: I1203 14:08:20.572874 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:20.572964 master-0 kubenswrapper[4430]: E1203 14:08:20.572929 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:20.572964 master-0 kubenswrapper[4430]: I1203 14:08:20.572937 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:20.573061 master-0 kubenswrapper[4430]: E1203 14:08:20.572989 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:08:20.573446 master-0 kubenswrapper[4430]: I1203 14:08:20.573310 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:20.573714 master-0 kubenswrapper[4430]: I1203 14:08:20.573656 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 14:08:20.573775 master-0 kubenswrapper[4430]: I1203 14:08:20.573723 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 14:08:20.573860 master-0 kubenswrapper[4430]: I1203 14:08:20.573682 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:08:20.573916 master-0 kubenswrapper[4430]: I1203 14:08:20.573867 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:08:20.573982 master-0 kubenswrapper[4430]: I1203 14:08:20.573951 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:08:20.574158 master-0 kubenswrapper[4430]: I1203 14:08:20.574125 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 14:08:20.574537 master-0 kubenswrapper[4430]: E1203 14:08:20.574441 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:20.574537 master-0 kubenswrapper[4430]: I1203 14:08:20.574505 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.574668 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575330 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575387 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575570 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575657 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575697 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575821 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.575866 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.575995 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.576042 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.576074 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.576094 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.576234 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.576272 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.576517 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.576569 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.576766 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: E1203 14:08:20.576819 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:20.577166 master-0 kubenswrapper[4430]: I1203 14:08:20.576967 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 14:08:20.579181 master-0 kubenswrapper[4430]: I1203 14:08:20.579155 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:08:20.579941 master-0 kubenswrapper[4430]: I1203 14:08:20.579783 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:20.579941 master-0 kubenswrapper[4430]: E1203 14:08:20.579886 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:20.580887 master-0 kubenswrapper[4430]: I1203 14:08:20.580385 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:20.580887 master-0 kubenswrapper[4430]: E1203 14:08:20.580505 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:20.580887 master-0 kubenswrapper[4430]: I1203 14:08:20.580588 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 03 14:08:20.580887 master-0 kubenswrapper[4430]: I1203 14:08:20.580604 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 14:08:20.581110 master-0 kubenswrapper[4430]: I1203 14:08:20.580890 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 03 14:08:20.581110 master-0 kubenswrapper[4430]: I1203 14:08:20.580969 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:20.581110 master-0 kubenswrapper[4430]: I1203 14:08:20.581028 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:20.581110 master-0 kubenswrapper[4430]: E1203 14:08:20.581037 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:20.581110 master-0 kubenswrapper[4430]: E1203 14:08:20.581081 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:20.584032 master-0 kubenswrapper[4430]: I1203 14:08:20.583983 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 14:08:20.584931 master-0 kubenswrapper[4430]: I1203 14:08:20.584216 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm" Dec 03 14:08:20.584931 master-0 kubenswrapper[4430]: I1203 14:08:20.584552 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Dec 03 14:08:20.584931 master-0 kubenswrapper[4430]: I1203 14:08:20.584623 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:20.584931 master-0 kubenswrapper[4430]: E1203 14:08:20.584681 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:20.584931 master-0 kubenswrapper[4430]: I1203 14:08:20.584723 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw" Dec 03 14:08:20.585202 master-0 kubenswrapper[4430]: I1203 14:08:20.584958 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:20.585202 master-0 kubenswrapper[4430]: E1203 14:08:20.585017 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:20.585202 master-0 kubenswrapper[4430]: I1203 14:08:20.585051 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:20.585202 master-0 kubenswrapper[4430]: I1203 14:08:20.585186 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl" Dec 03 14:08:20.585506 master-0 kubenswrapper[4430]: E1203 14:08:20.585196 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:20.585506 master-0 kubenswrapper[4430]: I1203 14:08:20.585248 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.585767 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.585865 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.585878 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.585927 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.585952 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.586032 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.586099 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.586145 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: I1203 14:08:20.586181 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.586307 master-0 kubenswrapper[4430]: E1203 14:08:20.586236 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:20.590093 master-0 kubenswrapper[4430]: I1203 14:08:20.588292 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.590093 master-0 kubenswrapper[4430]: I1203 14:08:20.588312 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:20.590093 master-0 kubenswrapper[4430]: E1203 14:08:20.588386 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:20.590093 master-0 kubenswrapper[4430]: E1203 14:08:20.588479 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:20.590093 master-0 kubenswrapper[4430]: I1203 14:08:20.589140 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 14:08:20.593568 master-0 kubenswrapper[4430]: I1203 14:08:20.591978 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.593568 master-0 kubenswrapper[4430]: E1203 14:08:20.592093 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:20.593568 master-0 kubenswrapper[4430]: I1203 14:08:20.593455 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 14:08:20.596784 master-0 kubenswrapper[4430]: I1203 14:08:20.594121 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 03 14:08:20.596784 master-0 kubenswrapper[4430]: I1203 14:08:20.594519 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 03 14:08:20.596784 master-0 kubenswrapper[4430]: I1203 14:08:20.594927 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 14:08:20.596784 master-0 kubenswrapper[4430]: I1203 14:08:20.596465 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:08:20.596784 
master-0 kubenswrapper[4430]: I1203 14:08:20.596749 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 03 14:08:20.598227 master-0 kubenswrapper[4430]: I1203 14:08:20.598055 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.598596 master-0 kubenswrapper[4430]: E1203 14:08:20.598247 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:20.600943 master-0 kubenswrapper[4430]: I1203 14:08:20.600804 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.601055 master-0 kubenswrapper[4430]: E1203 14:08:20.601024 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:20.601117 master-0 kubenswrapper[4430]: I1203 14:08:20.601077 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.601332 master-0 kubenswrapper[4430]: E1203 14:08:20.601296 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:20.601487 master-0 kubenswrapper[4430]: I1203 14:08:20.601433 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.601645 master-0 kubenswrapper[4430]: E1203 14:08:20.601602 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:20.601835 master-0 kubenswrapper[4430]: I1203 14:08:20.601811 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:20.601914 master-0 kubenswrapper[4430]: E1203 14:08:20.601871 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:20.602370 master-0 kubenswrapper[4430]: I1203 14:08:20.602329 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.602576 master-0 kubenswrapper[4430]: E1203 14:08:20.602546 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:20.602673 master-0 kubenswrapper[4430]: I1203 14:08:20.602591 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.602796 master-0 kubenswrapper[4430]: E1203 14:08:20.602781 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:20.602988 master-0 kubenswrapper[4430]: I1203 14:08:20.602947 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.603045 master-0 kubenswrapper[4430]: E1203 14:08:20.602999 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:20.603345 master-0 kubenswrapper[4430]: I1203 14:08:20.603304 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.603466 master-0 kubenswrapper[4430]: E1203 14:08:20.603432 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:20.603636 master-0 kubenswrapper[4430]: I1203 14:08:20.603599 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.604561 master-0 kubenswrapper[4430]: I1203 14:08:20.604490 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.604561 master-0 kubenswrapper[4430]: I1203 14:08:20.604554 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:20.604670 master-0 kubenswrapper[4430]: E1203 14:08:20.604643 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:20.604724 master-0 kubenswrapper[4430]: E1203 14:08:20.604672 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:20.606080 master-0 kubenswrapper[4430]: I1203 14:08:20.606041 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 14:08:20.607342 master-0 kubenswrapper[4430]: I1203 14:08:20.607275 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 14:08:20.607594 master-0 kubenswrapper[4430]: I1203 14:08:20.607534 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh" Dec 03 14:08:20.608132 master-0 kubenswrapper[4430]: I1203 14:08:20.608110 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 14:08:20.641436 master-0 kubenswrapper[4430]: I1203 14:08:20.641329 4430 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.648956 master-0 kubenswrapper[4430]: I1203 14:08:20.648905 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.650618 master-0 kubenswrapper[4430]: I1203 14:08:20.650577 4430 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 14:08:20.685471 master-0 kubenswrapper[4430]: I1203 14:08:20.685363 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.685471 master-0 kubenswrapper[4430]: I1203 14:08:20.685470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685506 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685530 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685556 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685578 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685603 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685628 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 
14:08:20.685656 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: I1203 14:08:20.685690 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: E1203 14:08:20.685707 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: E1203 14:08:20.685745 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: E1203 14:08:20.685756 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: E1203 14:08:20.685812 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.185784094 +0000 UTC m=+1.808698170 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:20.685822 master-0 kubenswrapper[4430]: E1203 14:08:20.685831 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.185824155 +0000 UTC m=+1.808738231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685843 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.185837785 +0000 UTC m=+1.808751861 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: I1203 14:08:20.685711 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685880 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685883 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: I1203 14:08:20.685963 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685977 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:20.686328 master-0 
kubenswrapper[4430]: I1203 14:08:20.685877 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685911 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.685921 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.185906317 +0000 UTC m=+1.808820393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.686208 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.186194375 +0000 UTC m=+1.809108461 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: E1203 14:08:20.686226 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.186217226 +0000 UTC m=+1.809131302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: I1203 14:08:20.686248 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: I1203 14:08:20.686280 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.686328 master-0 kubenswrapper[4430]: I1203 14:08:20.686328 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: E1203 14:08:20.686329 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.186284358 +0000 UTC m=+1.809198604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686370 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686391 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686412 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" 
(UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686472 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686491 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686511 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686535 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686548 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686557 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686584 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686607 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686615 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686626 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686653 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686675 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686693 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:20.686939 master-0 
kubenswrapper[4430]: I1203 14:08:20.686713 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686731 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: E1203 14:08:20.686751 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686758 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: E1203 14:08:20.686787 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.186776232 +0000 UTC m=+1.809690318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686806 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686839 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686867 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: I1203 14:08:20.686895 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: 
\"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:20.686939 master-0 kubenswrapper[4430]: E1203 14:08:20.686926 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.686960 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.686968 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.186958117 +0000 UTC m=+1.809872373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.686925 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687012 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687028 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687038 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687044 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687097 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687129 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687135 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687149 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687154 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687180 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187158763 +0000 UTC m=+1.810072839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687203 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687212 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687229 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687247 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.187236805 +0000 UTC m=+1.810151101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687265 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687272 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687293 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687300 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187291587 +0000 UTC m=+1.810205663 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687324 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687333 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687339 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687362 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687357 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187347058 +0000 UTC m=+1.810261354 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687400 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687409 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687443 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687450 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687456 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig 
podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187448021 +0000 UTC m=+1.810362097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687481 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687509 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687513 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687541 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187532743 +0000 UTC m=+1.810446809 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687569 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687584 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187572135 +0000 UTC m=+1.810486301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687607 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187599615 +0000 UTC m=+1.810513791 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687626 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187617196 +0000 UTC m=+1.810531372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687658 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687693 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687695 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687780 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687806 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687808 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687753 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687882 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object 
"openshift-route-controller-manager"/"config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687542 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687849 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187836272 +0000 UTC m=+1.810750438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687929 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687937 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687956 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" 
(UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687965 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187953605 +0000 UTC m=+1.810867681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687966 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.687985 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187978086 +0000 UTC m=+1.810892162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.687982 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688003 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.187995937 +0000 UTC m=+1.810910013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688017 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188009547 +0000 UTC m=+1.810923723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688034 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188025367 +0000 UTC m=+1.810939443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688034 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688054 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688054 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 
14:08:20.688062 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188053648 +0000 UTC m=+1.810967724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688094 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188085139 +0000 UTC m=+1.810999215 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688111 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688137 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688162 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688187 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188175272 +0000 UTC m=+1.811089348 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688210 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688217 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688230 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688235 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688248 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert 
podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188239514 +0000 UTC m=+1.811153680 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688263 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188255494 +0000 UTC m=+1.811169570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688281 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688298 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688307 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688323 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188314236 +0000 UTC m=+1.811228412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688286 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688340 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688365 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188355767 +0000 UTC m=+1.811269943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688381 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688407 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188398888 +0000 UTC m=+1.811313064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688426 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688410 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688466 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.18845555 +0000 UTC m=+1.811369696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688516 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688532 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688541 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688520 4430 secret.go:189] Couldn't get secret 
openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688612 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188603824 +0000 UTC m=+1.811518020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688584 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688637 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688645 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod 
\"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688670 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688562 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688704 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688706 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188698297 +0000 UTC m=+1.811612363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688744 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688770 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688801 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688828 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688853 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.188835821 +0000 UTC m=+1.811749897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688882 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688941 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688985 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: 
\"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.688994 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689029 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689087 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689164 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689187 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689198 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.689215 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689229 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689255 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.689327 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.189249642 +0000 UTC m=+1.812163808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.688726 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: E1203 14:08:20.689821 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.189778157 +0000 UTC m=+1.812692243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.689859 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.690048 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.690101 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.690105 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.693080 master-0 kubenswrapper[4430]: I1203 14:08:20.690133 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690256 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690293 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690390 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690464 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690511 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690515 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" 
(UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690568 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.190553529 +0000 UTC m=+1.813467805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690597 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690626 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690652 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690674 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690693 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690711 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690735 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690755 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690780 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690783 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690803 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690820 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.190809127 +0000 UTC m=+1.813723203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690844 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690883 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690886 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690917 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690934 4430 configmap.go:193] Couldn't get 
configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690947 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690965 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.190952461 +0000 UTC m=+1.813866537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.690978 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691016 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691073 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.191036663 +0000 UTC m=+1.813950739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691079 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691092 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.191084735 +0000 UTC m=+1.813998811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691134 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.191126136 +0000 UTC m=+1.814040212 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.690983 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691167 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691227 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.191214078 +0000 UTC m=+1.814128154 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691257 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691282 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691302 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691321 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691341 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691363 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691394 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691447 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691479 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691504 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691530 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691554 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691564 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691578 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691610 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.691722 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691814 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.691871 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.191839256 +0000 UTC m=+1.814753402 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.692158 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.692279 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.692294 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.692306 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.692389 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.192376271 +0000 UTC m=+1.815290347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.692836 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.693012 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.695478 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 
14:08:20.694995 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.695544 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.693100 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695602 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.195587293 +0000 UTC m=+1.818501369 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695206 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.695628 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695395 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.695643 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695674 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.195667955 +0000 UTC m=+1.818582031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695411 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695496 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695492 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695743 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695771 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695782 4430 secret.go:189] Couldn't 
get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695605 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.695693 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.195682555 +0000 UTC m=+1.818596631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.696585 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696604 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.696646 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.196633252 +0000 UTC m=+1.819547538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696681 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696719 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696748 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696777 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696791 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696806 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696860 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.696885 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:20.699244 master-0 
kubenswrapper[4430]: I1203 14:08:20.696901 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.696934 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.196923091 +0000 UTC m=+1.819837167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696962 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.696996 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.699244 
master-0 kubenswrapper[4430]: I1203 14:08:20.697025 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697057 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697087 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697119 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697163 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697201 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697254 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697272 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697358 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697377 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697393 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697397 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 
14:08:20.697405 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697497 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697497 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697546 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.697561 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697717 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.697832 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698089 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.197212199 +0000 UTC m=+1.820126435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698131 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.198117995 +0000 UTC m=+1.821032261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698157 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.198147026 +0000 UTC m=+1.821061342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698183 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.198173376 +0000 UTC m=+1.821087692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698209 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.198198997 +0000 UTC m=+1.821113293 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.698257 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.698293 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698312 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.698327 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" 
(UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698360 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.198348441 +0000 UTC m=+1.821262517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698405 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.698440 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698487 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: E1203 14:08:20.698492 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.198463365 +0000 UTC m=+1.821377631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:20.699244 master-0 kubenswrapper[4430]: I1203 14:08:20.698561 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698607 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698633 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698668 4430 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698698 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698730 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698758 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698816 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698842 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698880 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698909 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698938 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698964 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.698991 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699019 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699078 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699109 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699135 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699162 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699188 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699243 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699272 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699303 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699330 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699356 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699382 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699408 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699456 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699489 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699516 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699542 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699570 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699577 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699646 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699663 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699676 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199657819 +0000 UTC m=+1.822571895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699698 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199689729 +0000 UTC m=+1.822603805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699662 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699724 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.19971502 +0000 UTC m=+1.822629096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699737 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699744 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199733481 +0000 UTC m=+1.822647557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699809 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199794592 +0000 UTC m=+1.822708888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699820 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699826 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199817023 +0000 UTC m=+1.822731109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699827 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699852 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199841274 +0000 UTC m=+1.822755580 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699910 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199885425 +0000 UTC m=+1.822799711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699929 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199919576 +0000 UTC m=+1.822833862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699958 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199949477 +0000 UTC m=+1.822863773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699981 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199972318 +0000 UTC m=+1.822886604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.699997 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.199990658 +0000 UTC m=+1.822904954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700015 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200009329 +0000 UTC m=+1.822923405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700033 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200024059 +0000 UTC m=+1.822938385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700053 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20004405 +0000 UTC m=+1.822958356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700071 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20006466 +0000 UTC m=+1.822978736 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700078 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700089 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200082161 +0000 UTC m=+1.822996457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700137 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200127952 +0000 UTC m=+1.823042028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700170 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700194 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700256 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700033 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700314 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200306487 +0000 UTC m=+1.823220563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700356 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700487 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.699601 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700593 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700616 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700639 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700639 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700675 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.200667197 +0000 UTC m=+1.823581273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700667 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700713 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700731 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700779 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700783 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700839 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700853 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.700894 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700905 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.701298 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701309 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.201293395 +0000 UTC m=+1.824207631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.700939 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.701406 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701494 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.201362697 +0000 UTC m=+1.824276973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701551 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.201528242 +0000 UTC m=+1.824442358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701585 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.201572043 +0000 UTC m=+1.824486149 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.701629 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701766 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.201728338 +0000 UTC m=+1.824642564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701900 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701953 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.201941924 +0000 UTC m=+1.824856000 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.701978 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702060 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.202050467 +0000 UTC m=+1.824964553 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702200 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702291 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702357 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702472 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702509 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702546 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702587 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702621 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702674 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702697 4430 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object 
"openshift-dns"/"dns-default" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702706 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.202655194 +0000 UTC m=+1.825569270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702765 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.702624 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702873 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702920 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca 
podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.202900551 +0000 UTC m=+1.825814637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702957 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.202947272 +0000 UTC m=+1.825861548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.702978 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.202969533 +0000 UTC m=+1.825883819 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.703011 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.703082 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.703098 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: I1203 14:08:20.703117 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.703146 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.203132457 +0000 UTC m=+1.826046543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:20.705545 master-0 kubenswrapper[4430]: E1203 14:08:20.703170 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703171 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703211 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703220 4430 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20321185 +0000 UTC m=+1.826125936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703242 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703271 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703296 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703322 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703353 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703381 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703408 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703464 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " 
pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703490 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703517 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703547 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703575 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703605 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod 
\"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703637 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703668 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703695 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703724 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703738 4430 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703776 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703746 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703799 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703839 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.203828257 +0000 UTC m=+1.826742333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703851 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703854 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703893 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703942 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20393293 +0000 UTC m=+1.826847006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703972 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.703980 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.703996 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704008 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204000372 +0000 UTC m=+1.826914448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704020 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704045 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704051 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204044333 +0000 UTC m=+1.826958409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704069 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204061254 +0000 UTC m=+1.826975330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704072 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704087 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704092 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204085835 +0000 UTC m=+1.826999911 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704099 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704121 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704106 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204100505 +0000 UTC m=+1.827014581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704159 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204154147 +0000 UTC m=+1.827068223 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704159 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704200 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204194058 +0000 UTC m=+1.827108134 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704201 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704214 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204208688 +0000 UTC m=+1.827122764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704235 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704267 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704298 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704331 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " 
pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704363 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704399 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704466 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704493 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704499 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704506 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704554 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704648 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704663 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704704 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204687352 +0000 UTC m=+1.827601468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704712 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704727 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704730 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204718283 +0000 UTC m=+1.827632389 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704758 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204749503 +0000 UTC m=+1.827663579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704773 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704775 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704813 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddf4\" (UniqueName: 
\"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704828 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704843 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704932 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.204924388 +0000 UTC m=+1.827838464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.704968 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.20496187 +0000 UTC m=+1.827875946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.704994 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705133 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705154 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705170 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod 
\"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705248 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705270 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705290 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.705987 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706032 4430 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706055 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706056 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706079 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706127 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706191 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: 
\"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706268 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706343 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.206330089 +0000 UTC m=+1.829244295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706397 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20639042 +0000 UTC m=+1.829304496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706461 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706514 4430 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706539 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.206533214 +0000 UTC m=+1.829447290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706539 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706568 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706592 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706647 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706647 
4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706675 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.206668748 +0000 UTC m=+1.829582824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706701 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706730 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 
14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706761 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706794 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706811 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706830 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.206821292 +0000 UTC m=+1.829735368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706853 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706861 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706882 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706886 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.206880664 +0000 UTC m=+1.829794740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706913 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706937 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706942 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706957 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object 
"openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706963 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.706984 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706958 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.707003 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.206992787 +0000 UTC m=+1.829907053 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: I1203 14:08:20.706980 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.707085 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.207022328 +0000 UTC m=+1.829936574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:20.711270 master-0 kubenswrapper[4430]: E1203 14:08:20.707128 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.207113411 +0000 UTC m=+1.830027537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707182 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707234 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707285 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707478 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: 
\"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707632 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707691 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.707725 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707737 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.707758 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.207750289 +0000 UTC m=+1.830664365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707782 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707825 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707875 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707934 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.707979 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708018 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708052 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708073 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708111 
4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.208094669 +0000 UTC m=+1.831008795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708144 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708185 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708209 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.720913 
master-0 kubenswrapper[4430]: I1203 14:08:20.708215 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708234 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708274 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708293 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708298 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 
14:08:20.708352 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.208334595 +0000 UTC m=+1.831248862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708378 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708390 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708406 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708455 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca 
podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.208414788 +0000 UTC m=+1.831328864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708484 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708519 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708539 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708555 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708554 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708554 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708579 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.208568922 +0000 UTC m=+1.831483078 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708627 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708661 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708674 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708677 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708702 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.708719 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.208709226 +0000 UTC m=+1.831623302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708788 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708807 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: 
\"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.708979 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709026 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709082 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709125 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 
14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709168 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709210 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709258 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709288 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: 
\"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709343 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709357 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709379 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709401 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.209387235 +0000 UTC m=+1.832301331 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709462 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709498 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709509 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709127 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709549 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20953717 +0000 UTC m=+1.832451396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709561 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709573 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20956117 +0000 UTC m=+1.832475416 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709604 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.209595841 +0000 UTC m=+1.832509917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709647 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709686 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709720 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709751 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " 
pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709779 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709804 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709834 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709805 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709862 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709867 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709891 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20988221 +0000 UTC m=+1.832796286 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709844 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709915 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.20990359 +0000 UTC m=+1.832817896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709921 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.709960 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709966 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.209955642 +0000 UTC m=+1.832869738 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.709978 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710016 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210004253 +0000 UTC m=+1.832918549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710041 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710077 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 
14:08:20.710081 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210067435 +0000 UTC m=+1.832981511 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710116 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210107746 +0000 UTC m=+1.833021822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710113 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710128 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710145 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710151 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710178 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.210167718 +0000 UTC m=+1.833082014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710204 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710246 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710305 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: 
\"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710309 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710325 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710329 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710350 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210338143 +0000 UTC m=+1.833252229 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710352 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710377 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710399 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210388854 +0000 UTC m=+1.833302930 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710402 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710456 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710461 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210450096 +0000 UTC m=+1.833364362 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710505 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710533 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710580 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " 
pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710603 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710614 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710626 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710650 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710685 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710696 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710733 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710738 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210729694 +0000 UTC m=+1.833643770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710820 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: I1203 14:08:20.710826 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710778 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:20.720913 master-0 kubenswrapper[4430]: E1203 14:08:20.710853 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210843117 +0000 UTC m=+1.833757183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.710871 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.710874 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.710877 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.210866468 +0000 UTC m=+1.833780544 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.710915 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.710941 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.21093573 +0000 UTC m=+1.833849796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711013 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711124 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711161 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711183 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711195 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711211 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.211204287 +0000 UTC m=+1.834118363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711232 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711223 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711259 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711264 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711309 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.21129485 +0000 UTC m=+1.834209116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711335 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711362 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711380 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.211369092 +0000 UTC m=+1.834283178 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711334 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711408 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.211396893 +0000 UTC m=+1.834310969 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711455 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711491 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711516 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711541 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711567 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711588 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711604 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711655 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711667 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.21165381 +0000 UTC m=+1.834567886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711683 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.211675961 +0000 UTC m=+1.834590037 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711612 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711734 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711772 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711822 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711841 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711858 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711863 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711907 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711915 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.711914 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.211901227 +0000 UTC m=+1.834815483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.711975 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712000 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.211980869 +0000 UTC m=+1.834894975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712039 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712051 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712078 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712088 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212074012 +0000 UTC m=+1.834988088 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712134 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712137 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712172 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712234 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712265 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212253567 +0000 UTC m=+1.835167953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712307 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712321 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712342 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712356 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.21234865 +0000 UTC m=+1.835262726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712374 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712411 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712455 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712462 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: 
object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712489 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712488 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712513 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212501144 +0000 UTC m=+1.835415240 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712529 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712558 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712566 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212557086 +0000 UTC m=+1.835471162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712564 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712583 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712607 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712617 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712630 4430 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712670 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712675 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712681 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212666199 +0000 UTC m=+1.835580465 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712681 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712704 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.21269696 +0000 UTC m=+1.835611036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712726 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712744 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212728881 +0000 UTC m=+1.835642987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712778 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712825 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712853 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212839714 +0000 UTC m=+1.835754020 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712903 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.712910 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.712992 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.212975178 +0000 UTC m=+1.835889284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713024 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713035 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713087 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713138 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 
14:08:20.713148 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713176 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.213166683 +0000 UTC m=+1.836080969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713203 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " 
pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713246 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713271 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713294 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.213281866 +0000 UTC m=+1.836195932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713314 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713336 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713408 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713464 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.213453491 +0000 UTC m=+1.836367567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713505 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: E1203 14:08:20.713579 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.213542714 +0000 UTC m=+1.836456980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713640 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:20.728971 master-0 kubenswrapper[4430]: I1203 14:08:20.713640 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.815738 master-0 kubenswrapper[4430]: I1203 14:08:20.815561 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.815987 master-0 kubenswrapper[4430]: I1203 14:08:20.815721 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.815987 master-0 kubenswrapper[4430]: I1203 14:08:20.815881 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.816241 master-0 kubenswrapper[4430]: I1203 14:08:20.816203 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:20.816295 master-0 kubenswrapper[4430]: I1203 14:08:20.816255 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.816337 master-0 kubenswrapper[4430]: I1203 14:08:20.816294 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:20.816337 master-0 kubenswrapper[4430]: I1203 14:08:20.816326 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.816446 master-0 kubenswrapper[4430]: I1203 14:08:20.816378 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.816497 master-0 kubenswrapper[4430]: I1203 14:08:20.816474 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.816538 master-0 kubenswrapper[4430]: I1203 14:08:20.816503 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" 
(UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.816593 master-0 kubenswrapper[4430]: I1203 14:08:20.816550 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.816593 master-0 kubenswrapper[4430]: I1203 14:08:20.815999 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.816682 master-0 kubenswrapper[4430]: I1203 14:08:20.816576 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.816682 master-0 kubenswrapper[4430]: I1203 14:08:20.816606 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:20.816816 master-0 kubenswrapper[4430]: I1203 14:08:20.816786 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.817001 master-0 kubenswrapper[4430]: I1203 14:08:20.816973 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.817057 master-0 kubenswrapper[4430]: I1203 14:08:20.817025 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.817102 master-0 kubenswrapper[4430]: I1203 14:08:20.817062 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.817140 master-0 kubenswrapper[4430]: I1203 14:08:20.817109 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.817179 master-0 kubenswrapper[4430]: I1203 14:08:20.817139 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.817179 master-0 kubenswrapper[4430]: I1203 14:08:20.817167 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.817251 master-0 kubenswrapper[4430]: I1203 14:08:20.817193 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.817302 master-0 kubenswrapper[4430]: I1203 14:08:20.817288 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.817622 master-0 kubenswrapper[4430]: I1203 14:08:20.817470 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.817622 master-0 kubenswrapper[4430]: I1203 14:08:20.817513 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:20.817622 master-0 kubenswrapper[4430]: I1203 14:08:20.817521 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.817622 master-0 kubenswrapper[4430]: I1203 14:08:20.817542 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:20.817622 master-0 kubenswrapper[4430]: I1203 14:08:20.817571 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817631 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817681 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817701 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817739 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817716 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817748 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.817863 master-0 kubenswrapper[4430]: I1203 14:08:20.817819 4430 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.818114 master-0 kubenswrapper[4430]: I1203 14:08:20.817834 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:20.818114 master-0 kubenswrapper[4430]: I1203 14:08:20.817850 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.818114 master-0 kubenswrapper[4430]: I1203 14:08:20.817994 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.818114 master-0 kubenswrapper[4430]: I1203 14:08:20.817964 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:20.818114 master-0 kubenswrapper[4430]: I1203 14:08:20.818091 4430 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.817863 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818361 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818368 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818408 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818497 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818550 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818571 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818602 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818607 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818641 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818667 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818728 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818751 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818780 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:20.818844 master-0 
kubenswrapper[4430]: I1203 14:08:20.818756 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818810 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:20.818844 master-0 kubenswrapper[4430]: I1203 14:08:20.818846 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.818986 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819003 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819061 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819097 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819193 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819252 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819293 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819319 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819348 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819439 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819527 master-0 kubenswrapper[4430]: I1203 14:08:20.819480 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:08:20.819914 master-0 kubenswrapper[4430]: I1203 14:08:20.819557 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819914 master-0 kubenswrapper[4430]: I1203 14:08:20.819574 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.819914 master-0 kubenswrapper[4430]: I1203 14:08:20.819616 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.819914 master-0 kubenswrapper[4430]: I1203 14:08:20.819677 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:20.819914 master-0 kubenswrapper[4430]: I1203 14:08:20.819897 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:20.820090 master-0 kubenswrapper[4430]: I1203 14:08:20.820021 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:08:20.820090 master-0 kubenswrapper[4430]: I1203 14:08:20.820035 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:08:20.820090 master-0 kubenswrapper[4430]: I1203 14:08:20.820060 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.820206 master-0 kubenswrapper[4430]: I1203 14:08:20.820134 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.820354 master-0 kubenswrapper[4430]: I1203 14:08:20.820325 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.820458 master-0 kubenswrapper[4430]: I1203 14:08:20.820351 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820458 master-0 kubenswrapper[4430]: I1203 14:08:20.820397 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820458 master-0 kubenswrapper[4430]: I1203 14:08:20.820404 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.820576 master-0 kubenswrapper[4430]: I1203 14:08:20.820531 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820576 master-0 kubenswrapper[4430]: I1203 14:08:20.820547 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820576 master-0 kubenswrapper[4430]: I1203 14:08:20.820560 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820663 master-0 kubenswrapper[4430]: I1203 14:08:20.820613 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820743 master-0 kubenswrapper[4430]: I1203 14:08:20.820713 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820802 master-0 kubenswrapper[4430]: I1203 14:08:20.820755 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.820802 master-0 kubenswrapper[4430]: I1203 14:08:20.820781 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.820864 master-0 kubenswrapper[4430]: I1203 14:08:20.820833 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.825829 master-0 kubenswrapper[4430]: I1203 14:08:20.820834 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.825829 master-0 kubenswrapper[4430]: I1203 14:08:20.821841 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.827053 master-0 kubenswrapper[4430]: I1203 14:08:20.826936 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.828129 master-0 kubenswrapper[4430]: I1203 14:08:20.828062 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.828190 master-0 kubenswrapper[4430]: I1203 14:08:20.828138 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828307 master-0 kubenswrapper[4430]: I1203 14:08:20.828276 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828371 master-0 kubenswrapper[4430]: I1203 14:08:20.828317 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828516 master-0 kubenswrapper[4430]: I1203 14:08:20.828414 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828587 master-0 kubenswrapper[4430]: I1203 14:08:20.828373 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828587 master-0 kubenswrapper[4430]: I1203 14:08:20.828532 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:08:20.828587 master-0 kubenswrapper[4430]: I1203 14:08:20.828490 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:08:20.828810 master-0 kubenswrapper[4430]: I1203 14:08:20.828599 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828810 master-0 kubenswrapper[4430]: I1203 14:08:20.828638 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828810 master-0 kubenswrapper[4430]: I1203 14:08:20.828690 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.828810 master-0 kubenswrapper[4430]: I1203 14:08:20.828649 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.828958 master-0 kubenswrapper[4430]: I1203 14:08:20.828898 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:20.828958 master-0 kubenswrapper[4430]: I1203 14:08:20.828927 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829061 master-0 kubenswrapper[4430]: I1203 14:08:20.828954 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829061 master-0 kubenswrapper[4430]: I1203 14:08:20.828971 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:08:20.829061 master-0 kubenswrapper[4430]: I1203 14:08:20.828999 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:20.829061 master-0 kubenswrapper[4430]: I1203 14:08:20.829012 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829061 master-0 kubenswrapper[4430]: I1203 14:08:20.829044 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.829237 master-0 kubenswrapper[4430]: I1203 14:08:20.829101 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:20.829237 master-0 kubenswrapper[4430]: I1203 14:08:20.829167 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829314 master-0 kubenswrapper[4430]: I1203 14:08:20.829254 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.829432 master-0 kubenswrapper[4430]: I1203 14:08:20.829366 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829492 master-0 kubenswrapper[4430]: I1203 14:08:20.829453 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829492 master-0 kubenswrapper[4430]: I1203 14:08:20.829391 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.829595 master-0 kubenswrapper[4430]: I1203 14:08:20.829563 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:08:20.829686 master-0 kubenswrapper[4430]: I1203 14:08:20.829669 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:08:20.830035 master-0 kubenswrapper[4430]: I1203 14:08:20.829991 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 14:08:20.830035 master-0 kubenswrapper[4430]: I1203 14:08:20.830029 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 14:08:20.830131 master-0 kubenswrapper[4430]: I1203 14:08:20.830063 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830177 master-0 kubenswrapper[4430]: I1203 14:08:20.830146 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830217 master-0 kubenswrapper[4430]: I1203 14:08:20.830192 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830436 master-0 kubenswrapper[4430]: I1203 14:08:20.830247 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830436 master-0 kubenswrapper[4430]: I1203 14:08:20.830389 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830568 master-0 kubenswrapper[4430]: I1203 14:08:20.830528 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:20.830711 master-0 kubenswrapper[4430]: I1203 14:08:20.830675 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.830779 master-0 kubenswrapper[4430]: I1203 14:08:20.830762 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:08:20.830815 master-0 kubenswrapper[4430]: I1203 14:08:20.830766 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.830844 master-0 kubenswrapper[4430]: I1203 14:08:20.830829 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.855290 master-0 kubenswrapper[4430]: E1203 14:08:20.855162 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:08:20.855922 master-0 kubenswrapper[4430]: E1203 14:08:20.855777 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Dec 03 14:08:20.858550 master-0 kubenswrapper[4430]: E1203 14:08:20.857371 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:08:20.858550 master-0 kubenswrapper[4430]: E1203 14:08:20.857595 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:08:20.858550 master-0 kubenswrapper[4430]: E1203 14:08:20.857612 4430 kubelet.go:1929] "Failed creating a mirror pod 
for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:08:20.858550 master-0 kubenswrapper[4430]: E1203 14:08:20.857727 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877556 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877615 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877638 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877654 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877695 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877713 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877728 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.377696936 +0000 UTC m=+2.000611022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.877790 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.377764358 +0000 UTC m=+2.000678434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: I1203 14:08:20.878301 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878527 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878549 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878565 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878629 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.378605432 +0000 UTC m=+2.001519508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878720 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878732 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878746 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.878770 master-0 kubenswrapper[4430]: E1203 14:08:20.878776 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878858 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878889 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878907 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878870 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.878882 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879026 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878796 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 
03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879166 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.878954 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.378940562 +0000 UTC m=+2.001854838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.879223 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879260 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.37923352 +0000 UTC m=+2.002147596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879193 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879341 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.379330223 +0000 UTC m=+2.002244299 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.879407 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.879496 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.379487027 +0000 UTC m=+2.002401103 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.880054 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.880203 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: I1203 14:08:20.880531 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.880587 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: 
E1203 14:08:20.880608 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.885334 master-0 kubenswrapper[4430]: E1203 14:08:20.880655 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.38063776 +0000 UTC m=+2.003551836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895235 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895278 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895296 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895377 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.395348849 +0000 UTC m=+2.018262925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895703 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895756 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895779 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.895865 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.395838523 +0000 UTC m=+2.018752599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: I1203 14:08:20.896242 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: I1203 14:08:20.896330 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.896545 4430 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.896571 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.896591 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object 
"openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.896699 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.396664856 +0000 UTC m=+2.019578932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: I1203 14:08:20.896903 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: I1203 14:08:20.896942 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.897024 4430 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object 
"openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.897042 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.898474 master-0 kubenswrapper[4430]: E1203 14:08:20.897053 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.899324 master-0 kubenswrapper[4430]: E1203 14:08:20.898734 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.398700204 +0000 UTC m=+2.021614280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: I1203 14:08:20.901466 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901550 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901565 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901576 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901628 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:21.401612767 +0000 UTC m=+2.024527023 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: I1203 14:08:20.901569 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901786 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901808 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901821 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901869 4430 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.401854404 +0000 UTC m=+2.024768530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901964 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901976 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.901985 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.902014 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.402006318 +0000 UTC m=+2.024920604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.902058 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.902070 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.902079 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.902720 master-0 kubenswrapper[4430]: E1203 14:08:20.902108 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.402101201 +0000 UTC m=+2.025015477 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.904105 master-0 kubenswrapper[4430]: E1203 14:08:20.903614 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:20.904105 master-0 kubenswrapper[4430]: E1203 14:08:20.903639 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.904105 master-0 kubenswrapper[4430]: E1203 14:08:20.903649 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.904105 master-0 kubenswrapper[4430]: E1203 14:08:20.903685 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.403674146 +0000 UTC m=+2.026588422 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.905795 master-0 kubenswrapper[4430]: I1203 14:08:20.905403 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:08:20.905795 master-0 kubenswrapper[4430]: E1203 14:08:20.905698 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.905795 master-0 kubenswrapper[4430]: E1203 14:08:20.905720 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:20.905795 master-0 kubenswrapper[4430]: E1203 14:08:20.905740 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.905795 master-0 kubenswrapper[4430]: E1203 14:08:20.905789 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.405771005 +0000 UTC m=+2.028685281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:20.906866 master-0 kubenswrapper[4430]: I1203 14:08:20.906833 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:08:20.907784 master-0 kubenswrapper[4430]: I1203 14:08:20.907742 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:08:20.907856 master-0 kubenswrapper[4430]: E1203 14:08:20.907822 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:20.907856 master-0 kubenswrapper[4430]: E1203 14:08:20.907840 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object 
"openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.907856 master-0 kubenswrapper[4430]: E1203 14:08:20.907852 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.907952 master-0 kubenswrapper[4430]: E1203 14:08:20.907898 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.407884365 +0000 UTC m=+2.030798631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.908077 master-0 kubenswrapper[4430]: E1203 14:08:20.908050 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.908121 master-0 kubenswrapper[4430]: E1203 14:08:20.908080 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.908121 master-0 kubenswrapper[4430]: E1203 14:08:20.908095 4430 projected.go:194] Error preparing data for projected volume
kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.908176 master-0 kubenswrapper[4430]: E1203 14:08:20.908136 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.408122942 +0000 UTC m=+2.031037218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.909998 master-0 kubenswrapper[4430]: I1203 14:08:20.909967 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:20.910252 master-0 kubenswrapper[4430]: E1203 14:08:20.910227 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.910298 master-0 kubenswrapper[4430]: E1203 14:08:20.910254 4430 projected.go:288] Couldn't get
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.910298 master-0 kubenswrapper[4430]: E1203 14:08:20.910268 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.910363 master-0 kubenswrapper[4430]: E1203 14:08:20.910314 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.410303184 +0000 UTC m=+2.033217260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.910409 master-0 kubenswrapper[4430]: E1203 14:08:20.910374 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.910409 master-0 kubenswrapper[4430]: E1203 14:08:20.910390 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.910409 master-0 kubenswrapper[4430]: E1203 14:08:20.910401 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod
openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.910506 master-0 kubenswrapper[4430]: E1203 14:08:20.910483 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.410470779 +0000 UTC m=+2.033385045 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.911043 master-0 kubenswrapper[4430]: E1203 14:08:20.911010 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.911043 master-0 kubenswrapper[4430]: E1203 14:08:20.911038 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.911131 master-0 kubenswrapper[4430]: E1203 14:08:20.911051 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.911131 master-0 kubenswrapper[4430]: E1203
14:08:20.911098 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.411085836 +0000 UTC m=+2.034000092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.912157 master-0 kubenswrapper[4430]: I1203 14:08:20.912114 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:20.912733 master-0 kubenswrapper[4430]: I1203 14:08:20.912702 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:20.914394 master-0 kubenswrapper[4430]: E1203 14:08:20.914363 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.914394 master-0 kubenswrapper[4430]: E1203 14:08:20.914391 4430 projected.go:288] Couldn't get
configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.914647 master-0 kubenswrapper[4430]: E1203 14:08:20.914405 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.914647 master-0 kubenswrapper[4430]: E1203 14:08:20.914515 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.414501474 +0000 UTC m=+2.037415750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.916414 master-0 kubenswrapper[4430]: E1203 14:08:20.916372 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:20.916505 master-0 kubenswrapper[4430]: E1203 14:08:20.916435 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:20.916505 master-0 kubenswrapper[4430]: E1203 14:08:20.916456 4430 projected.go:194] Error preparing data for projected volume
kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.916588 master-0 kubenswrapper[4430]: E1203 14:08:20.916557 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.416528301 +0000 UTC m=+2.039442527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:20.918781 master-0 kubenswrapper[4430]: I1203 14:08:20.918740 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:20.932055 master-0 kubenswrapper[4430]: I1203 14:08:20.931995 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") "
Dec 03 14:08:20.932207 master-0 kubenswrapper[4430]: I1203 14:08:20.932159 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\"
(UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") "
Dec 03 14:08:20.932207 master-0 kubenswrapper[4430]: I1203 14:08:20.932169 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:08:20.932350 master-0 kubenswrapper[4430]: I1203 14:08:20.932283 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock" (OuterVolumeSpecName: "var-lock") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:08:20.935374 master-0 kubenswrapper[4430]: I1203 14:08:20.935335 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:08:20.935374 master-0 kubenswrapper[4430]: I1203 14:08:20.935367 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0b1e0884-ff54-419b-90d3-25f561a6391d-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 14:08:20.935475 master-0 kubenswrapper[4430]: I1203 14:08:20.935456 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:21.008248 master-0 kubenswrapper[4430]: E1203 14:08:21.007658 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.008248 master-0 kubenswrapper[4430]: E1203 14:08:21.008130 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.008248 master-0 kubenswrapper[4430]: E1203 14:08:21.008220 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.5081948 +0000 UTC m=+2.131108876 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.008913 master-0 kubenswrapper[4430]: E1203 14:08:21.008886 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.008913 master-0 kubenswrapper[4430]: E1203 14:08:21.008911 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 14:08:21.008984 master-0 kubenswrapper[4430]: E1203 14:08:21.008922 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.008984 master-0 kubenswrapper[4430]: E1203 14:08:21.008973 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.508961042 +0000 UTC m=+2.131875118 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.009333 master-0 kubenswrapper[4430]: E1203 14:08:21.009310 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.009370 master-0 kubenswrapper[4430]: E1203 14:08:21.009333 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:21.009370 master-0 kubenswrapper[4430]: E1203 14:08:21.009346 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.009441 master-0 kubenswrapper[4430]: E1203 14:08:21.009383 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.509373504 +0000 UTC m=+2.132287580 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.010691 master-0 kubenswrapper[4430]: E1203 14:08:21.010654 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.010691 master-0 kubenswrapper[4430]: E1203 14:08:21.010680 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:21.010691 master-0 kubenswrapper[4430]: E1203 14:08:21.010690 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.010848 master-0 kubenswrapper[4430]: E1203 14:08:21.010723 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.510715172 +0000 UTC m=+2.133629248 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.042540 master-0 kubenswrapper[4430]: I1203 14:08:21.042464 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:21.062842 master-0 kubenswrapper[4430]: E1203 14:08:21.062778 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.062842 master-0 kubenswrapper[4430]: E1203 14:08:21.062839 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:08:21.063078 master-0 kubenswrapper[4430]: E1203 14:08:21.062858 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.063078 master-0 kubenswrapper[4430]: E1203 14:08:21.062980 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed.
No retries permitted until 2025-12-03 14:08:21.562921198 +0000 UTC m=+2.185835274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.081181 master-0 kubenswrapper[4430]: E1203 14:08:21.081071 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:21.081181 master-0 kubenswrapper[4430]: E1203 14:08:21.081121 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:21.081181 master-0 kubenswrapper[4430]: E1203 14:08:21.081137 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.081601 master-0 kubenswrapper[4430]: E1203 14:08:21.081233 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:21.581207289 +0000 UTC m=+2.204121365 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:21.224050 master-0 kubenswrapper[4430]: I1203 14:08:21.223967 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Dec 03 14:08:21.245282 master-0 kubenswrapper[4430]: I1203 14:08:21.245153 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:21.245282 master-0 kubenswrapper[4430]: I1203 14:08:21.245195 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.245282 master-0 kubenswrapper[4430]: I1203 14:08:21.245266 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:21.245282 master-0 kubenswrapper[4430]: I1203 14:08:21.245291 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245325 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245345 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245366 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245387 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:21.246352 master-0
kubenswrapper[4430]: I1203 14:08:21.245407 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245451 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245482 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245503 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245523 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod
\"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245530 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245615 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245662 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.245638839 +0000 UTC m=+2.868552915 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245668 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245653 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245713 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24570036 +0000 UTC m=+2.868614436 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245710 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245730 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245734 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245781 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245833 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245835 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245755 4430 configmap.go:193] Couldn't get configMap 
openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245854 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245760 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245905 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.245553 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245806 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245732 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.245724751 +0000 UTC m=+2.868638827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246093 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246067981 +0000 UTC m=+2.868982057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246113 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246104292 +0000 UTC m=+2.869018368 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.245920 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246133 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246123642 +0000 UTC m=+2.869037718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246152 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246144253 +0000 UTC m=+2.869058329 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246171 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246163284 +0000 UTC m=+2.869077360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246186 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246179454 +0000 UTC m=+2.869093530 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246203 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246196345 +0000 UTC m=+2.869110411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246217 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246210205 +0000 UTC m=+2.869124281 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246231 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246224255 +0000 UTC m=+2.869138331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.246274 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.246317 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:21.246352 master-0 
kubenswrapper[4430]: I1203 14:08:21.246341 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.246398 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246407 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: I1203 14:08:21.246440 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246446 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:21.246352 master-0 kubenswrapper[4430]: E1203 14:08:21.246475 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object 
"openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246487 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246430 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246516 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246475 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246440711 +0000 UTC m=+2.869354787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246546 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246537064 +0000 UTC m=+2.869451140 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246556 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246573 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246604 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246591236 +0000 UTC m=+2.869505312 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246611 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246651 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246640837 +0000 UTC m=+2.869554913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246668 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246661878 +0000 UTC m=+2.869575954 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246660 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246686 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246676288 +0000 UTC m=+2.869590364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246702 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246694769 +0000 UTC m=+2.869608845 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246708 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246719 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246710249 +0000 UTC m=+2.869624315 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246738 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24673157 +0000 UTC m=+2.869645636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246763 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246795 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246827 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246850 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object 
"openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246880 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246873064 +0000 UTC m=+2.869787140 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246878 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246959 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246961 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.246966 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: 
\"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.246995 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.246985537 +0000 UTC m=+2.869899803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247014 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247005568 +0000 UTC m=+2.869919644 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247018 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247043 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247048 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247051 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247042229 +0000 UTC m=+2.869956305 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247103 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247101 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247124 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247128 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247144 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247155 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247144942 +0000 UTC m=+2.870059018 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247179 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247169892 +0000 UTC m=+2.870083968 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247180 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247210 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247203543 +0000 UTC m=+2.870117619 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247232 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247260 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247284 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247290 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247295 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247286666 +0000 UTC m=+2.870200742 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247318 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247335 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247346 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247358 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247351587 +0000 UTC m=+2.870265663 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247374 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247365628 +0000 UTC m=+2.870279694 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247345 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247380 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247389 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247392 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247384608 +0000 UTC m=+2.870298684 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247489 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247516 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247540 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247551 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247565 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247584 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247576744 +0000 UTC m=+2.870490820 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247619 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247622 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247646 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247640536 +0000 UTC m=+2.870554612 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247658 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247666 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247686 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247678207 +0000 UTC m=+2.870592283 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247708 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247730 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247724238 +0000 UTC m=+2.870638314 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247731 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247753 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247744599 +0000 UTC m=+2.870658665 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247770 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247763249 +0000 UTC m=+2.870677325 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247782 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247789 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.2477816 +0000 UTC m=+2.870695666 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247805 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24779929 +0000 UTC m=+2.870713366 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247708 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247839 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247862 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247840 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247864 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247858072 +0000 UTC m=+2.870772148 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.247902 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.247895723 +0000 UTC m=+2.870809799 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247932 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247966 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.247997 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248019 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248078 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248101 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248127 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248146 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248166 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248186 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248229 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248250 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248271 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248292 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248327 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248352 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248371 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: I1203 14:08:21.248393 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.248484 4430 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.248512 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.248542 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:08:21.248390 master-0 kubenswrapper[4430]: E1203 14:08:21.248580 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248600 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248612 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248626 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248645 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248652 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248670 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248689 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248704 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248598 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248580 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248740 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248744 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248713 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248784 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248810 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.248802159 +0000 UTC m=+2.871716235 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248833 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248840 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248867 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24886024 +0000 UTC m=+2.871774316 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248887 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248890 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248903 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.248897441 +0000 UTC m=+2.871811517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248891 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248923 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.248915392 +0000 UTC m=+2.871829468 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248943 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248968 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.248992 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.248948 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249020 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249049 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249032305 +0000 UTC m=+2.871946381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249072 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249077 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249096 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249089217 +0000 UTC m=+2.872003293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249118 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249116 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249138 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249132548 +0000 UTC m=+2.872046624 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249152 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249174 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249191 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249203 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24919434 +0000 UTC m=+2.872108416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249221 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.24921121 +0000 UTC m=+2.872125536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249229 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249237 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249229311 +0000 UTC m=+2.872143607 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249000 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249248 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249242561 +0000 UTC m=+2.872156637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249260 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249253712 +0000 UTC m=+2.872167788 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249274 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249267312 +0000 UTC m=+2.872181388 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249291 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249283942 +0000 UTC m=+2.872198018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249303 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:22.249298623 +0000 UTC m=+2.872212699 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249317 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249311063 +0000 UTC m=+2.872225139 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249333 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249326164 +0000 UTC m=+2.872240230 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249347 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249340294 +0000 UTC m=+2.872254370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249355 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249319 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249359 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:22.249354694 +0000 UTC m=+2.872268960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249413 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249402196 +0000 UTC m=+2.872316512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249446 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249438847 +0000 UTC m=+2.872353113 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249461 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249454907 +0000 UTC m=+2.872369193 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249293 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249476 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249470088 +0000 UTC m=+2.872384394 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249514 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249504099 +0000 UTC m=+2.872418175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249542 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249576 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249575 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod 
\"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249611 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249600751 +0000 UTC m=+2.872515017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249624 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249653 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249662 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249653683 +0000 UTC m=+2.872567759 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249701 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249718 4430 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249736 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249742 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249736285 +0000 UTC m=+2.872650361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249786 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249774306 +0000 UTC m=+2.872688562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249814 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249805127 +0000 UTC m=+2.872719423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249814 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249838 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249829498 +0000 UTC m=+2.872743794 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249860 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249852199 +0000 UTC m=+2.872766495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249817 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249875 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249868689 +0000 UTC m=+2.872782985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249901 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249947 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249949 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.249975 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.249967552 +0000 UTC m=+2.872881628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.249998 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250004 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250031 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250025473 +0000 UTC m=+2.872939549 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250028 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250054 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250093 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250108 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250132 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250126166 +0000 UTC m=+2.873040242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250155 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250175 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250178 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250172858 +0000 UTC m=+2.873086924 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250211 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250202489 +0000 UTC m=+2.873116565 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250220 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250223 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250217509 +0000 UTC m=+2.873131585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250243 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25023764 +0000 UTC m=+2.873151716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250259 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25025226 +0000 UTC m=+2.873166336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250300 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250337 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250380 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250404 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250430 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250461 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250454726 +0000 UTC m=+2.873368802 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250487 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250503 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250522 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250513617 +0000 UTC m=+2.873427703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250539 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250564 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250558859 +0000 UTC m=+2.873472935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250563 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250602 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250613 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: I1203 14:08:21.250628 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:21.252701 master-0 kubenswrapper[4430]: E1203 14:08:21.250641 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250635811 +0000 UTC m=+2.873549887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250674 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.250700 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250727 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.250734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250750 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250743944 +0000 UTC m=+2.873658020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250739 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250797 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250817 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250823 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250815316 +0000 UTC m=+2.873729392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250800 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250842 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250836387 +0000 UTC m=+2.873750463 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.250772 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250878 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250854907 +0000 UTC m=+2.873768983 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250930 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.250922269 +0000 UTC m=+2.873836345 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250779 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.250952 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25094298 +0000 UTC m=+2.873857046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251009 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251049 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251079 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251104 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251130 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251154 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251158 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251199 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251205 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251262 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251316 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251323 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251366 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251369 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251218 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251210507 +0000 UTC m=+2.874124583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251398 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251391572 +0000 UTC m=+2.874305648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251461 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251442854 +0000 UTC m=+2.874356930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251492 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251483925 +0000 UTC m=+2.874397991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251514 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251506846 +0000 UTC m=+2.874420922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251538 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251529246 +0000 UTC m=+2.874443322 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251559 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251551787 +0000 UTC m=+2.874465863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251577 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251571018 +0000 UTC m=+2.874485094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251648 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251677 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251711 4430 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251731 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251763 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251786 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251817 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod 
\"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251847 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251868 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251898 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251891297 +0000 UTC m=+2.874805373 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251912 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251918 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251941 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251934698 +0000 UTC m=+2.874848764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.251870 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251943 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251982 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.251998 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.251975389 +0000 UTC m=+2.874889505 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252030 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252033 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252051 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252044781 +0000 UTC m=+2.874958857 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252045 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252066 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252059841 +0000 UTC m=+2.874973917 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252087 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252106 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252114 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252125 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252150 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 
14:08:21.252127 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252122143 +0000 UTC m=+2.875036219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252170 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252191 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252186825 +0000 UTC m=+2.875100901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252201 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252241 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252258 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252266 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252260787 +0000 UTC m=+2.875174863 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252280 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252274348 +0000 UTC m=+2.875188424 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252294 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252286648 +0000 UTC m=+2.875200724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252311 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252306128 +0000 UTC m=+2.875220194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252265 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252324 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252319379 +0000 UTC m=+2.875233455 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252338 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252332029 +0000 UTC m=+2.875246105 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252369 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252392 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " 
pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252402 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252469 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252487 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252498 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252490074 +0000 UTC m=+2.875404150 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252447 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252511 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252505204 +0000 UTC m=+2.875419280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252527 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252520465 +0000 UTC m=+2.875434541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252544 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252562 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252599 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252562586 +0000 UTC m=+2.875476662 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252624 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252687 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252717 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252743 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 
14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252770 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252754631 +0000 UTC m=+2.875668747 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252788 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252818 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252811203 +0000 UTC m=+2.875725269 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252821 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252852 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252877 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252870625 +0000 UTC m=+2.875784701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252896 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252926 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252938 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252953 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 
14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.252976 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.252997 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.252990638 +0000 UTC m=+2.875904704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253030 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253045 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253072 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25305676 +0000 UTC m=+2.875970876 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253085 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253097 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253085411 +0000 UTC m=+2.875999517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253121 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253124 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253110981 +0000 UTC m=+2.876025087 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253170 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: I1203 14:08:21.253183 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253190 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253184603 +0000 UTC m=+2.876098679 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253223 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:21.257817 master-0 kubenswrapper[4430]: E1203 14:08:21.253243 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253238195 +0000 UTC m=+2.876152271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253243 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253260 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:22.253254625 +0000 UTC m=+2.876168701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253286 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253302 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253353 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253397 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253516 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253506203 +0000 UTC m=+2.876420279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253512 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253563 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253587 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253611 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253594735 +0000 UTC m=+2.876508851 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253629 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253654 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253648817 +0000 UTC m=+2.876562893 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253681 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253714 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253740 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253734919 +0000 UTC m=+2.876648995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253771 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253792 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253786351 +0000 UTC m=+2.876700427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253814 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253831 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253825902 +0000 UTC m=+2.876739978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253866 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253888 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253881773 +0000 UTC m=+2.876795849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.253683 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.253998 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.253979746 +0000 UTC m=+2.876893862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254044 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254142 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254188 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254232 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254268 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254334 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254366 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254339766 +0000 UTC m=+2.877253842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254388 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254381197 +0000 UTC m=+2.877295273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254276 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254455 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254490 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254501 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25448586 +0000 UTC m=+2.877399976 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254554 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254570 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254592 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254604 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254622 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:22.254597294 +0000 UTC m=+2.877511410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254647 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254663 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254686 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254676446 +0000 UTC m=+2.877590522 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254691 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254713 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254724 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254717377 +0000 UTC m=+2.877631453 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254732 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254689 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254763 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254742 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254770 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254764978 +0000 UTC m=+2.877679054 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254809 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254800679 +0000 UTC m=+2.877714755 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254812 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254822 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25481601 +0000 UTC m=+2.877730086 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254835 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.25483015 +0000 UTC m=+2.877744226 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254855 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254932 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254955 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.254945444 +0000 UTC m=+2.877859700 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254983 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255013 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.254986 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255039 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255032836 +0000 UTC m=+2.877946912 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.254997 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255093 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255076667 +0000 UTC m=+2.877990773 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255124 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255114908 +0000 UTC m=+2.878029194 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255153 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255186 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255220 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255253 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255292 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255288 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255324 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255351 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255358 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255362 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255392 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255360695 +0000 UTC m=+2.878274961 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255404 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255445 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255465 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255470 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255445 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255409067 +0000 UTC m=+2.878323333 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255554 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255582 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255589 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255577982 +0000 UTC m=+2.878492058 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255622 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255633 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255617713 +0000 UTC m=+2.878531829 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255658 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255646634 +0000 UTC m=+2.878560750 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255682 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255668874 +0000 UTC m=+2.878582990 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255706 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255693665 +0000 UTC m=+2.878607781 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255728 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255718266 +0000 UTC m=+2.878632372 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255761 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255803 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255817 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255848 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: I1203 14:08:21.255900 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255906 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255932 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.255915201 +0000 UTC m=+2.878829367 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.256080 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.256062665 +0000 UTC m=+2.878976891 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255958 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.256103 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.256091806 +0000 UTC m=+2.879006112 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.255998 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.256147 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.256135497 +0000 UTC m=+2.879049773 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:21.261776 master-0 kubenswrapper[4430]: E1203 14:08:21.256172 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:22.256161938 +0000 UTC m=+2.879076234 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:21.999138 master-0 kubenswrapper[4430]: I1203 14:08:21.997162 4430 request.go:700] Waited for 1.176784036s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa/token
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999209 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999370 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999559 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999631 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999695 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:21.999816 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000033 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000090 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000277 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000326 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000366 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000441 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.000869 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001023 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.001110 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001162 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001407 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001496 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001508 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001570 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.001553166 +0000 UTC m=+3.624467252 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001639 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001668 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001687 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.001724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001753 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.001733851 +0000 UTC m=+3.624647927 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001839 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001856 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001866 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001870 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001898 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.001886936 +0000 UTC m=+3.624801192 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.001958 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001971 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001986 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.001996 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002001 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002032 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.002022959 +0000 UTC m=+3.624937215 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.002056 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.002060 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002095 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002111 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002124 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002127 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.002134 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002159 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.002150993 +0000 UTC m=+3.625065259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.002176 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: I1203 14:08:22.002177 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002223 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002228 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002238 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.002171 master-0 kubenswrapper[4430]: E1203 14:08:22.002262 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002288 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002303 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002313 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002317 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.002297367 +0000 UTC m=+3.625211443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002346 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.002336018 +0000 UTC m=+3.625250184 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002353 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002402 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002445 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002461 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002467 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002522 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002588 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.001463 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002657 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002735 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002741 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002774 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002781 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002789 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002837 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002855 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002907 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002936 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002946 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003078 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002990 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003037 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.003313 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003381 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003446 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003469 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003492 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003581 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003638 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003675 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003708 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003744 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.002630 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.003811 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003859 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003872 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003919 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003920 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003985 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.003992 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004018 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004072 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004115 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004162 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004201 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004215 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004221 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004260 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002473 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004236 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004295 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002528 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004261 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004353 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004367 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002577 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004451 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004463 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004469 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004474 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004511 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002580 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002625 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.002627 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004229 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004589 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 
14:08:22.004646 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004605 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004738 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004763 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004783 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004788 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004789 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004807 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004541 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.004270 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.004822 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.005068 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005278 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005329 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005361 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005386 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.005495 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005534 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005568 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005846 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005886 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005902 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005921 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.005985 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006009 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006033 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006132 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006226 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006255 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006280 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006304 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.006386 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006607 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006573949 +0000 UTC m=+3.629488175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006641 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006633321 +0000 UTC m=+3.629547397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006656 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:23.006649601 +0000 UTC m=+3.629563677 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006694 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006670862 +0000 UTC m=+3.629584938 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006717 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006708733 +0000 UTC m=+3.629622809 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006736 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006728933 +0000 UTC m=+3.629643239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.006751 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.006744454 +0000 UTC m=+3.629658520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.006878 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.006989 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007015 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007036 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: 
\"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007081 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007155 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007229 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007255 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007269 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not 
registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007321 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.00730275 +0000 UTC m=+3.630217056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007375 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007385 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007402 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007410 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 
for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007526 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007536 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007592 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007606 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007613 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007635 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.007628039 +0000 UTC m=+3.630542115 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007648 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.00764276 +0000 UTC m=+3.630556836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007678 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007696 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007706 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007712 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007726 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007735 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007754 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.007745432 +0000 UTC m=+3.630659728 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007790 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007802 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007809 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007820 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod 
\"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007833 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.007827545 +0000 UTC m=+3.630741621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007889 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007903 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007916 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007944 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.007935998 +0000 UTC m=+3.630850294 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: I1203 14:08:22.007915 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007955 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007967 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007975 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.007998 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.007991199 +0000 UTC m=+3.630905275 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.010932 master-0 kubenswrapper[4430]: E1203 14:08:22.008046 4430 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.008043 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008080 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008110 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.008128 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008133 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008159 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008194 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.008210 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008219 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008247 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008278 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008304 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008329 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008357 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008059 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008378 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008384 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008393 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008399 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008410 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008466 
4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008383 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008475 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008497 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008506 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008506 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008203 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008535 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008544 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008538 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008562 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008532 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008548 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008598 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008562 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008613 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008464 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.008455443 +0000 UTC m=+3.631369519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008637 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008576 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008599 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008620 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008670 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008644 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008600 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008401 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008621 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008250 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008843 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008855 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008290 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008906 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008917 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008331 4430 projected.go:288] 
Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.005251 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.009209 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010011 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008482 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.008664 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.008643398 +0000 UTC m=+3.631557654 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010069 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010087 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010076 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010059588 +0000 UTC m=+3.632973664 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010184 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010166781 +0000 UTC m=+3.633081037 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010218 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010208693 +0000 UTC m=+3.633122759 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010243 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010235993 +0000 UTC m=+3.633150069 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010271 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010257644 +0000 UTC m=+3.633171720 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010294 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010285325 +0000 UTC m=+3.633199401 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010323 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010312486 +0000 UTC m=+3.633226562 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: E1203 14:08:22.010341 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.010335296 +0000 UTC m=+3.633249372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.019219 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"0a57a8bdd5b6859edb5ca8bb103c32c2e252a56328e53f02c6630b3ca1df16e3"} Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.021243 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.021211 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" 
event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"5fd119858007e4b5a1b4112671b0b0fdba132ce8265b36ea78a8e9fea5aa487a"} Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.021507 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:22.023823 master-0 kubenswrapper[4430]: I1203 14:08:22.021702 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: I1203 14:08:22.322666 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: I1203 14:08:22.322815 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: E1203 14:08:22.322842 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: I1203 14:08:22.322867 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: E1203 14:08:22.322938 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.322914661 +0000 UTC m=+4.945828747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:22.322986 master-0 kubenswrapper[4430]: I1203 14:08:22.322986 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323033 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323071 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod 
\"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323103 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323027 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323162 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323215 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323241 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323161 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323249 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32324 +0000 UTC m=+4.946154096 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323118 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323404 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323375724 +0000 UTC m=+4.946289830 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323461 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323445526 +0000 UTC m=+4.946359642 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323128 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323487 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323473917 +0000 UTC m=+4.946388023 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323046 4430 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323525 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323567 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323543689 +0000 UTC m=+4.946457765 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323620 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323665 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323692 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323678652 +0000 UTC m=+4.946592758 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323747 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323780 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323772745 +0000 UTC m=+4.946686821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323779 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: I1203 14:08:22.323826 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323891 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323911 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323921 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.323828717 +0000 UTC m=+4.946742893 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:22.323931 master-0 kubenswrapper[4430]: E1203 14:08:22.323983 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.323967041 +0000 UTC m=+4.946881227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324012 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324000142 +0000 UTC m=+4.946914258 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324082 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324147 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324122565 +0000 UTC m=+4.947036701 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324183 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324197 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324241 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324280 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324320 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324333 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324317001 +0000 UTC m=+4.947231087 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324357 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324347151 +0000 UTC m=+4.947261237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324372 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324404 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.324395043 +0000 UTC m=+4.947309239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324458 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324491 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324523 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324522 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: 
I1203 14:08:22.324554 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324580 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324583 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324614 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324590 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324634 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324602969 +0000 UTC m=+4.947517205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324653 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324662 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324677 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324669911 +0000 UTC m=+4.947583987 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324676 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324696 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324686691 +0000 UTC m=+4.947600777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324723 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324772 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324780 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324800 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324792674 +0000 UTC m=+4.947706760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324820 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324811485 +0000 UTC m=+4.947725571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324834 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324857 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324868 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.324895 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324840 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324918 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324893717 +0000 UTC m=+4.947808073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324949 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324938538 +0000 UTC m=+4.947852634 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324966 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.324971 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.324962089 +0000 UTC m=+4.947876175 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325036 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325021351 +0000 UTC m=+4.947935467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325058 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325047971 +0000 UTC m=+4.947962077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325106 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325155 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325161 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" 
not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325180 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325170645 +0000 UTC m=+4.948084731 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325226 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325258 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325251097 +0000 UTC m=+4.948165193 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325283 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325347 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325374 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325390 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325409 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325395981 +0000 UTC m=+4.948310067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325457 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325466 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325489 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325480544 +0000 UTC m=+4.948394640 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325524 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325530 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325569 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325577 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325538195 +0000 UTC m=+4.948452521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325592 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325624 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325614618 +0000 UTC m=+4.948528704 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325626 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325662 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object 
"openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325682 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325711 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32569457 +0000 UTC m=+4.948608676 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325739 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325725291 +0000 UTC m=+4.948639377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325763 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325769 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325826 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325854 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325843874 +0000 UTC m=+4.948757960 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325889 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325922 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325914406 +0000 UTC m=+4.948828492 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: I1203 14:08:22.325886 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:22.325817 master-0 kubenswrapper[4430]: E1203 14:08:22.325940 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:22.325817 master-0 
kubenswrapper[4430]: I1203 14:08:22.325977 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.325987 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.325977248 +0000 UTC m=+4.948891334 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326022 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326031 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326053 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326062 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32605453 +0000 UTC m=+4.948968626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326087 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326112 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326121 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326141 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326132242 +0000 UTC m=+4.949046328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326166 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326178 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326209 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.326199534 +0000 UTC m=+4.949113630 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326233 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326262 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326253946 +0000 UTC m=+4.949168032 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326231 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326283 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326314 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326305017 +0000 UTC m=+4.949219103 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326316 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326359 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326369 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326389 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326379949 +0000 UTC m=+4.949294045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326445 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326463 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326499 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326514 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326543 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326615 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326624 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326648 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326637747 +0000 UTC m=+4.949551843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326677 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326711 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326741 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.326783 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod
\"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326715 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326939 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326750 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326807 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326828 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.326814712 +0000 UTC m=+4.949728888 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327045 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327055 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327030058 +0000 UTC m=+4.949944174 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326860 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327083 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327071249 +0000 UTC m=+4.949985355 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327123 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327148 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327125451 +0000 UTC m=+4.950039567 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326882 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327218 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327203383 +0000 UTC m=+4.950117709 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327217 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327266 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327272 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327284 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327251954 +0000 UTC m=+4.950166310 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327228 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327321 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327307466 +0000 UTC m=+4.950221572 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327378 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327364387 +0000 UTC m=+4.950278673 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.326929 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327396 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327387058 +0000 UTC m=+4.950301134 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327477 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32745994 +0000 UTC m=+4.950374056 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327508 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327494661 +0000 UTC m=+4.950408777 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327533 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327519642 +0000 UTC m=+4.950433758 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327568 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327556273 +0000 UTC m=+4.950470389 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327646 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327700 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327714 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName:
\"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327748 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327735128 +0000 UTC m=+4.950649234 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327789 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327822 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327843 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod 
\"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327883 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327869072 +0000 UTC m=+4.950783198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327887 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327919 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.327924 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327945 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327930814 +0000 UTC m=+4.950845080 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327951 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327969 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327956644 +0000 UTC m=+4.950870760 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.327990 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.327981155 +0000 UTC m=+4.950895231 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328059 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328103 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328124 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328143 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:22.332045
master-0 kubenswrapper[4430]: I1203 14:08:22.328166 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328204 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328226 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328246 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328258 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not
registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328268 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328305 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328309 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328351 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328369 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328314 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328297644 +0000 UTC m=+4.951211920 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328384 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328400 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328407 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328459 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328398747 +0000 UTC m=+4.951312843 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328270 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328486 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328472779 +0000 UTC m=+4.951387095 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328514 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32850062 +0000 UTC m=+4.951414926 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328538 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328524971 +0000 UTC m=+4.951439307 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328577 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328564462 +0000 UTC m=+4.951478768 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328605 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328593712 +0000 UTC m=+4.951507798 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328631 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328619433 +0000 UTC m=+4.951533739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328662 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.328649624 +0000 UTC m=+4.951563930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328682 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328672945 +0000 UTC m=+4.951587261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328714 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328803 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328821 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328839 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328864 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32885018 +0000 UTC m=+4.951764256 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328885 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328899 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328929 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328945 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328956 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328947203 +0000 UTC m=+4.951861299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: I1203 14:08:22.328918 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.328990 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.328968423 +0000 UTC m=+4.951882539 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.329017 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:22.332045 master-0 kubenswrapper[4430]: E1203 14:08:22.329019 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329019 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329005344 +0000 UTC m=+4.951919460 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329045 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329037515 +0000 UTC m=+4.951951601 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329072 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329121 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329138 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329150 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329158 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329151638 +0000 UTC m=+4.952065714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329177 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329198 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329203 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329215 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.32921033 +0000 UTC m=+4.952124396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329232 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329254 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329261 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329295 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329284782 +0000 UTC m=+4.952199008 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329274 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329331 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329358 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329350974 +0000 UTC m=+4.952265060 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329366 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329388 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329381655 +0000 UTC m=+4.952295731 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329330 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329390 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 
14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329516 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329529 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329402 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329394825 +0000 UTC m=+4.952308911 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329560 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329570 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.32955929 +0000 UTC m=+4.952473376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329594 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329629 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329600 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329457 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329670 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329627 4430 
secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329643 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329635772 +0000 UTC m=+4.952549848 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329700 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329691104 +0000 UTC m=+4.952605190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329732 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329715264 +0000 UTC m=+4.952629550 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329780 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329818 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329850 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329855 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.329845818 +0000 UTC m=+4.952759894 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329875 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329887 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329878029 +0000 UTC m=+4.952792115 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329921 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.32991069 +0000 UTC m=+4.952824776 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329935 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329826 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.329949 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.329962 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.329955731 +0000 UTC m=+4.952869827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330024 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330049 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330029013 +0000 UTC m=+4.952943139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330075 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330062314 +0000 UTC m=+4.952976430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330233 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330219339 +0000 UTC m=+4.953133425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330338 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330404 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330490 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330534 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330578 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330627 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330646 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330665 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330625 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330694 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330663 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330653371 +0000 UTC m=+4.953567457 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330792 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330773775 +0000 UTC m=+4.953687871 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330815 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330804095 +0000 UTC m=+4.953718181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330840 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330833746 +0000 UTC m=+4.953747832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330872 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330919 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.330951 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330959 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.330973 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.33095019 +0000 UTC m=+4.953864306 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331009 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.330994291 +0000 UTC m=+4.953908397 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331023 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331007 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331035 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331053 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331076 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331086 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331077083 +0000 UTC m=+4.953991179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331132 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331120074 +0000 UTC m=+4.954034190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331153 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331163 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331208 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331194147 +0000 UTC m=+4.954108263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331243 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331264 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331281 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331270579 +0000 UTC m=+4.954184685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331396 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331514 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331565 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331404 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331634 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331616 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331472 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331569 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331705 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331674 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.33166195 +0000 UTC m=+4.954576056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331729 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331742 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331731032 +0000 UTC m=+4.954645118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331752 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331767 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331810 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331791844 +0000 UTC m=+4.954705960 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331826 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331854 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331859 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331851135 +0000 UTC m=+4.954765231 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331898 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.331936 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.331927887 +0000 UTC m=+4.954841973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331967 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.331998 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332032 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332059 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332042111 +0000 UTC m=+4.954956227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332075 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.332033 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332103 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332084992 +0000 UTC m=+4.954999348 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332131 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332147 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332129283 +0000 UTC m=+4.955043589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.332134 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332178 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332180 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332167224 +0000 UTC m=+4.955081340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332272 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332258057 +0000 UTC m=+4.955172173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.332324 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: E1203 14:08:22.332348 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:22.341969 master-0 kubenswrapper[4430]: I1203 14:08:22.332373 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332382 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.33237345 +0000 UTC m=+4.955287526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332457 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332414051 +0000 UTC m=+4.955328157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332469 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332480 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332519 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332562 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332573 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332599 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332587116 +0000 UTC m=+4.955501202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332619 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332609007 +0000 UTC m=+4.955523093 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332657 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332699 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332733 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332660 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332792 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object 
"openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332807 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332833 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332817363 +0000 UTC m=+4.955731479 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332766 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332847 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332868 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332854674 +0000 UTC m=+4.955768790 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332976 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332965267 +0000 UTC m=+4.955879353 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.332971 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.332997 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.332989368 +0000 UTC m=+4.955903454 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333015 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333006818 +0000 UTC m=+4.955920904 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333061 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333065 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333107 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333093721 +0000 UTC m=+4.956007827 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333147 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333200 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333238 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333261 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333198 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 
03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333268 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333258525 +0000 UTC m=+4.956172601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333368 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333407 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333388299 +0000 UTC m=+4.956302415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333479 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333486 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333466421 +0000 UTC m=+4.956380537 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333525 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333511793 +0000 UTC m=+4.956425909 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333561 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333610 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333660 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333676 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333710 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333725 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333714428 +0000 UTC m=+4.956628524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333755 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333775 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333810 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333828 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333837 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333830362 +0000 UTC m=+4.956744438 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333872 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333875 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333892 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333886933 +0000 UTC m=+4.956801009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333910 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.333920 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333934 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333928704 +0000 UTC m=+4.956842780 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333976 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333962405 +0000 UTC m=+4.956876511 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.333985 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334013 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.333997556 +0000 UTC m=+4.956911872 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334095 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334104 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334094359 +0000 UTC m=+4.957008435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334146 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334173 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.334166801 +0000 UTC m=+4.957080877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334204 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334276 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334292 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334331 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334347 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334330776 +0000 UTC m=+4.957244892 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334381 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334391 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334463 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334476 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 
14:08:22.334482 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334510 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334440 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334551 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334565 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334527 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334513981 +0000 UTC m=+4.957428097 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334657 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334643385 +0000 UTC m=+4.957557471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334686 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334698 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334719 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " 
pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334724 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334716367 +0000 UTC m=+4.957630443 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334752 4430 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334755 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334747138 +0000 UTC m=+4.957661224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334775 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:24.334766558 +0000 UTC m=+4.957680644 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334796 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334787099 +0000 UTC m=+4.957701195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334823 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334860 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334866 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334921 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.334949 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.334869371 +0000 UTC m=+4.957783447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.334973 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335023 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: 
\"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335063 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335111 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335072 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335151 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335213 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335077577 +0000 UTC m=+4.957991653 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335244 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335229062 +0000 UTC m=+4.958143138 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335265 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335255032 +0000 UTC m=+4.958169178 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335305 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335284263 +0000 UTC m=+4.958198579 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335348 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335376 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335410 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335508 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335506 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335536 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.33552047 +0000 UTC m=+4.958434556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335556 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335549231 +0000 UTC m=+4.958463327 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335512 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335457 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335577 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335568381 +0000 UTC m=+4.958482477 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335558 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335672 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335647363 +0000 UTC m=+4.958561669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: E1203 14:08:22.335698 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335688815 +0000 UTC m=+4.958602891 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335723 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335747 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335774 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335802 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: 
\"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:22.350263 master-0 kubenswrapper[4430]: I1203 14:08:22.335826 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: I1203 14:08:22.335853 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.335862 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: I1203 14:08:22.335878 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: I1203 14:08:22.335908 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod 
\"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.335934 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335915781 +0000 UTC m=+4.958829867 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.335945 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.335965 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.335987 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336003 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.335992263 +0000 UTC m=+4.958906349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336000 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336019 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336011964 +0000 UTC m=+4.958926060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336053 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336042625 +0000 UTC m=+4.958956721 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336056 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: I1203 14:08:22.336009 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336071 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336110 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336074 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336064555 +0000 UTC m=+4.958978641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336031 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336167 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336144918 +0000 UTC m=+4.959059214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336211 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336194269 +0000 UTC m=+4.959108605 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: I1203 14:08:22.336295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336345 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336334743 +0000 UTC m=+4.959248819 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336366 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336358284 +0000 UTC m=+4.959272360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336470 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:08:22.360667 master-0 kubenswrapper[4430]: E1203 14:08:22.336501 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.336494308 +0000 UTC m=+4.959408384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:22.584056 master-0 kubenswrapper[4430]: I1203 14:08:22.583875 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: E1203 14:08:22.584080 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: I1203 14:08:22.584155 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: I1203 14:08:22.584189 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: I1203 14:08:22.584173 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: I1203 14:08:22.584219 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: E1203 14:08:22.584263 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:22.584536 master-0 kubenswrapper[4430]: I1203 14:08:22.584274 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:22.584761 master-0 kubenswrapper[4430]: E1203 14:08:22.584521 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:22.584761 master-0 kubenswrapper[4430]: E1203 14:08:22.584631 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:22.584761 master-0 kubenswrapper[4430]: E1203 14:08:22.584724 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:22.584860 master-0 kubenswrapper[4430]: E1203 14:08:22.584792 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634082 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634114 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634135 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634196 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.134175439 +0000 UTC m=+3.757089515 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634306 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634326 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634336 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634378 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.134367364 +0000 UTC m=+3.757281440 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634433 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634448 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634458 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634533 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.134487697 +0000 UTC m=+3.757401773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634574 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634588 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634599 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634631 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.134622221 +0000 UTC m=+3.757536297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634117 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634652 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.634687 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.134679243 +0000 UTC m=+3.757593319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.635007 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.635026 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.635054 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: E1203 14:08:22.635090 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.135079704 +0000 UTC m=+3.757993790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.635869 master-0 kubenswrapper[4430]: I1203 14:08:22.635238 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: I1203 14:08:22.636638 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637051 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637069 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637079 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: 
[object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637118 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.137106602 +0000 UTC m=+3.760020688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637155 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637167 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637178 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.637579 master-0 kubenswrapper[4430]: E1203 14:08:22.637209 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.137201205 +0000 UTC m=+3.760115291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.639434 master-0 kubenswrapper[4430]: I1203 14:08:22.639385 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:22.644709 master-0 kubenswrapper[4430]: E1203 14:08:22.644662 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.644709 master-0 kubenswrapper[4430]: E1203 14:08:22.644699 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.644831 master-0 kubenswrapper[4430]: E1203 14:08:22.644713 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object 
"openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.644831 master-0 kubenswrapper[4430]: E1203 14:08:22.644772 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.14475449 +0000 UTC m=+3.767668566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.644831 master-0 kubenswrapper[4430]: E1203 14:08:22.644817 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:22.644831 master-0 kubenswrapper[4430]: E1203 14:08:22.644828 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.644837 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.644873 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.144865703 +0000 UTC m=+3.767779789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: I1203 14:08:22.645743 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: I1203 14:08:22.646118 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: I1203 14:08:22.646210 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646291 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646376 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646393 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646407 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646482 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.146461719 +0000 UTC m=+3.769375795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646391 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646547 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646596 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.146584392 +0000 UTC m=+3.769498648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646329 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646630 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646641 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647081 master-0 kubenswrapper[4430]: E1203 14:08:22.646923 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.146908151 +0000 UTC m=+3.769822357 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.647998 master-0 kubenswrapper[4430]: E1203 14:08:22.647444 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:22.647998 master-0 kubenswrapper[4430]: E1203 14:08:22.647470 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.647998 master-0 kubenswrapper[4430]: E1203 14:08:22.647484 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.648648 master-0 kubenswrapper[4430]: E1203 14:08:22.648479 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.148462166 +0000 UTC m=+3.771376242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.649860 master-0 kubenswrapper[4430]: I1203 14:08:22.649834 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:08:22.650721 master-0 kubenswrapper[4430]: E1203 14:08:22.650691 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:22.650721 master-0 kubenswrapper[4430]: E1203 14:08:22.650718 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.650898 master-0 kubenswrapper[4430]: E1203 14:08:22.650729 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.650898 master-0 kubenswrapper[4430]: E1203 14:08:22.650785 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.150772741 +0000 UTC m=+3.773686997 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.650898 master-0 kubenswrapper[4430]: I1203 14:08:22.650856 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:08:22.655023 master-0 kubenswrapper[4430]: E1203 14:08:22.654979 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:22.655023 master-0 kubenswrapper[4430]: E1203 14:08:22.655004 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.655023 master-0 kubenswrapper[4430]: E1203 14:08:22.655016 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.655632 master-0 kubenswrapper[4430]: E1203 14:08:22.655058 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:23.155043663 +0000 UTC m=+3.777957739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.655632 master-0 kubenswrapper[4430]: E1203 14:08:22.655119 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:08:22.655632 master-0 kubenswrapper[4430]: E1203 14:08:22.655129 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:22.655632 master-0 kubenswrapper[4430]: E1203 14:08:22.655334 4430 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.655632 master-0 kubenswrapper[4430]: E1203 14:08:22.655375 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.155367932 +0000 UTC m=+3.778281998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:22.656754 master-0 kubenswrapper[4430]: I1203 14:08:22.656631 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:22.662635 master-0 kubenswrapper[4430]: I1203 14:08:22.662274 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:08:22.662635 master-0 kubenswrapper[4430]: I1203 14:08:22.662522 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:22.665258 master-0 kubenswrapper[4430]: E1203 14:08:22.664558 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:22.665258 master-0 kubenswrapper[4430]: E1203 14:08:22.664588 4430 
projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:22.665258 master-0 kubenswrapper[4430]: E1203 14:08:22.664672 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:23.164646787 +0000 UTC m=+3.787561033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:22.668876 master-0 kubenswrapper[4430]: I1203 14:08:22.668829 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:22.670869 master-0 kubenswrapper[4430]: I1203 14:08:22.670832 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:22.711793 master-0 kubenswrapper[4430]: I1203 14:08:22.711719 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:08:23.030173 master-0 kubenswrapper[4430]: I1203 14:08:23.029629 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" 
event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"4f2dfbfa8ca94b5824611cefb87c0e9841e76fbc58d3e7950aee65cdd550fb55"} Dec 03 14:08:23.031532 master-0 kubenswrapper[4430]: I1203 14:08:23.031497 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"bbead94692b339bd07c2e48969b5e8e5d7bc96f40b82a72fbab8051c0835433b"} Dec 03 14:08:23.033841 master-0 kubenswrapper[4430]: I1203 14:08:23.033379 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="ea50e18c6aeeaa86ed17b3bf50d0629d0766ba9ddae3277ebfe0d56734fc3fca" exitCode=0 Dec 03 14:08:23.033841 master-0 kubenswrapper[4430]: I1203 14:08:23.033465 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"ea50e18c6aeeaa86ed17b3bf50d0629d0766ba9ddae3277ebfe0d56734fc3fca"} Dec 03 14:08:23.035501 master-0 kubenswrapper[4430]: I1203 14:08:23.035452 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"7f832c38908426c04be74049b75276a224a4c272c95ea669c46fb755057dad06"} Dec 03 14:08:23.037145 master-0 kubenswrapper[4430]: I1203 14:08:23.037068 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"153f16943f56e9f517fe3bb5b6cd273abc62130b5202c99cf6cd2be09af776cd"} Dec 03 14:08:23.039073 master-0 kubenswrapper[4430]: I1203 14:08:23.039039 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" 
event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"83069a1a0084abec38cf08ffc3864c6dc387ece2da2dbaeed14a4f8878ec03d9"} Dec 03 14:08:23.040791 master-0 kubenswrapper[4430]: I1203 14:08:23.040757 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"0beebef07f0cead91e9334247c292ae81789441d58dee39e91d6971b5f65df56"} Dec 03 14:08:23.043302 master-0 kubenswrapper[4430]: I1203 14:08:23.043224 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"d0b962df8004724dd892c3623bab7db6773c671e1639de8604eb91c403982d54"} Dec 03 14:08:23.079275 master-0 kubenswrapper[4430]: I1203 14:08:23.079197 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:23.079275 master-0 kubenswrapper[4430]: I1203 14:08:23.079267 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: I1203 14:08:23.079314 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: I1203 14:08:23.079364 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: I1203 14:08:23.079433 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079439 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: I1203 14:08:23.079470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079482 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 
14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079497 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079577 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079630 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079654 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079663 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079679 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079687 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not 
registered] Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079696 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079708 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.079751 master-0 kubenswrapper[4430]: E1203 14:08:23.079776 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.079599 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.079874 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.079884 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.079723 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.080215 master-0 
kubenswrapper[4430]: E1203 14:08:23.079923 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.079735 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.079714972 +0000 UTC m=+5.702629038 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.080165 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.080120664 +0000 UTC m=+5.703034860 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080215 master-0 kubenswrapper[4430]: E1203 14:08:23.080208 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.080197636 +0000 UTC m=+5.703111902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080514 master-0 kubenswrapper[4430]: E1203 14:08:23.080238 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.080223277 +0000 UTC m=+5.703137553 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.080514 master-0 kubenswrapper[4430]: E1203 14:08:23.080272 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.080261368 +0000 UTC m=+5.703175654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.080514 master-0 kubenswrapper[4430]: E1203 14:08:23.080304 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.080292709 +0000 UTC m=+5.703206965 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081177 master-0 kubenswrapper[4430]: I1203 14:08:23.081118 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:23.081292 master-0 kubenswrapper[4430]: I1203 14:08:23.081253 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:23.081339 master-0 kubenswrapper[4430]: I1203 14:08:23.081305 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:23.081375 master-0 kubenswrapper[4430]: E1203 14:08:23.081339 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object 
"openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081375 master-0 kubenswrapper[4430]: E1203 14:08:23.081360 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.081375 master-0 kubenswrapper[4430]: E1203 14:08:23.081369 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081390 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081408 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.08139786 +0000 UTC m=+5.704312116 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081433 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: I1203 14:08:23.081343 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081445 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081490 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.081463042 +0000 UTC m=+5.704377308 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081499 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.081524 master-0 kubenswrapper[4430]: E1203 14:08:23.081512 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: E1203 14:08:23.081547 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.081538324 +0000 UTC m=+5.704452590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: E1203 14:08:23.081449 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: I1203 14:08:23.081606 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: E1203 14:08:23.081613 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: E1203 14:08:23.081724 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: E1203 14:08:23.081688 4430 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.081811 master-0 kubenswrapper[4430]: I1203 14:08:23.081773 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081782 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081894 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081856 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081919 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081929 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: E1203 14:08:23.081878 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.081866943 +0000 UTC m=+5.704781199 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082074 master-0 kubenswrapper[4430]: I1203 14:08:23.082069 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082088 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.082073169 +0000 UTC m=+5.704987415 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082115 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.08210501 +0000 UTC m=+5.705019296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082150 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082171 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082181 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object 
"openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082317 master-0 kubenswrapper[4430]: E1203 14:08:23.082290 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.082277495 +0000 UTC m=+5.705191711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.082638 master-0 kubenswrapper[4430]: I1203 14:08:23.082492 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:23.082638 master-0 kubenswrapper[4430]: I1203 14:08:23.082533 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:23.082638 master-0 kubenswrapper[4430]: I1203 14:08:23.082564 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:23.082638 master-0 kubenswrapper[4430]: I1203 14:08:23.082593 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:23.082767 master-0 kubenswrapper[4430]: I1203 14:08:23.082661 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:23.082801 master-0 kubenswrapper[4430]: I1203 14:08:23.082781 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:23.082856 master-0 kubenswrapper[4430]: I1203 14:08:23.082830 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: 
\"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:23.082945 master-0 kubenswrapper[4430]: I1203 14:08:23.082906 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: I1203 14:08:23.083020 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: I1203 14:08:23.083179 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: I1203 14:08:23.083273 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: 
\"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083180 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083313 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083337 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083205 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083358 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083373 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083385 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object 
"openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083341 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083447 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083463 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.083436498 +0000 UTC m=+5.706350574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083487 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.083475939 +0000 UTC m=+5.706390005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083488 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083518 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.083509 master-0 kubenswrapper[4430]: E1203 14:08:23.083532 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083571 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.083561152 +0000 UTC m=+5.706475418 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: I1203 14:08:23.083567 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083599 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.083583352 +0000 UTC m=+5.706497598 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083625 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083636 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083645 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083669 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.083662565 +0000 UTC m=+5.706576641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083260 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: I1203 14:08:23.083689 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: I1203 14:08:23.083727 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: I1203 14:08:23.083774 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: 
E1203 14:08:23.083693 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083842 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083872 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.0838645 +0000 UTC m=+5.706778576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083909 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083921 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083929 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for 
pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083984 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083996 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084003 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084023 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084016515 +0000 UTC m=+5.706930591 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: I1203 14:08:23.083982 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084032 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084048 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084056 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084088 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084079866 +0000 UTC m=+5.706994152 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.083266 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084120 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.084102 master-0 kubenswrapper[4430]: E1203 14:08:23.084128 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083279 4430 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084174 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object 
"openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084186 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083312 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084232 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084242 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083322 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084281 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084292 4430 projected.go:194] Error preparing data for projected 
volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083205 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084331 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084340 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083244 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084377 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084386 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod 
openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.083814 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084447 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084461 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084151 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084145678 +0000 UTC m=+5.707059754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: I1203 14:08:23.084509 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: I1203 14:08:23.084622 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084629 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084643 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084651 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084680 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084672903 +0000 UTC m=+5.707586979 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084698 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084693184 +0000 UTC m=+5.707607260 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084713 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084706074 +0000 UTC m=+5.707620150 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084728 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084719775 +0000 UTC m=+5.707633851 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084747 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084737045 +0000 UTC m=+5.707651341 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084759 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084764 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084755996 +0000 UTC m=+5.707670292 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084770 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084781 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084785 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084776696 +0000 UTC m=+5.707690982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084807 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084799947 +0000 UTC m=+5.707714023 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.085032 master-0 kubenswrapper[4430]: E1203 14:08:23.084823 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:25.084817577 +0000 UTC m=+5.707731653 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.185983 master-0 kubenswrapper[4430]: I1203 14:08:23.185916 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:23.185983 master-0 kubenswrapper[4430]: I1203 14:08:23.185969 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186013 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186039 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186113 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186136 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186235 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186271 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186267 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186322 4430 projected.go:288] Couldn't get
configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186371 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186439 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186449 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186466 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186477 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186285 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186510 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186374 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186567 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186578 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186584 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186597 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186605 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod
openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186604 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186476 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.186453473 +0000 UTC m=+4.809367739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186689 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186708 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186721 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186745 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186795 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.186770352 +0000 UTC m=+4.809684428 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186824 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186836 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186844 4430 projected.go:194]
Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186629 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: E1203 14:08:23.186876 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186916 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.186984 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.187045 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:23.187161 master-0 kubenswrapper[4430]: I1203 14:08:23.187144 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187459 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187479 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187490 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187574 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187586 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object
"openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187595 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187681 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187693 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187702 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187703 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187726 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187737 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187802 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187821 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.187830 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.188232 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.188214473 +0000 UTC m=+4.811128699 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189022 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.188981785 +0000 UTC m=+4.811895991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189079 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189061517 +0000 UTC m=+4.811975793 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189104 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189093548 +0000 UTC m=+4.812007834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189133 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189121699 +0000 UTC m=+4.812035985 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189161 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.18914885 +0000 UTC m=+4.812063146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189186 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189179111 +0000 UTC m=+4.812093377 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189214 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189203131 +0000 UTC m=+4.812117417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189236 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189227912 +0000 UTC m=+4.812142198 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189256 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189246093 +0000 UTC m=+4.812160359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.189286 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.189271443 +0000 UTC m=+4.812185659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: I1203 14:08:23.192684 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.192853 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:23.193008 master-0 kubenswrapper[4430]: E1203 14:08:23.192875 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.197690 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.197645442 +0000 UTC m=+4.820559518 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: I1203 14:08:23.198609 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: I1203 14:08:23.198644 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: I1203 14:08:23.198671 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: I1203 14:08:23.198694 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" 
(UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198755 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198797 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198809 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198837 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198862 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198870 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.198849356 +0000 UTC m=+4.821763432 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198874 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198934 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.198912478 +0000 UTC m=+4.821826764 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198945 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198959 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.198977 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.199002 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.199022 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 
14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.199030 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.199019451 +0000 UTC m=+4.821933697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:23.202179 master-0 kubenswrapper[4430]: E1203 14:08:23.199085 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:24.199075033 +0000 UTC m=+4.821989269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:23.583979 master-0 kubenswrapper[4430]: I1203 14:08:23.583897 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:23.583979 master-0 kubenswrapper[4430]: I1203 14:08:23.583942 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584005 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584078 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584092 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584147 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: E1203 14:08:23.584092 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584175 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584307 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.583918 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584324 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: E1203 14:08:23.584327 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584349 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584383 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584371 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:23.584371 master-0 kubenswrapper[4430]: I1203 14:08:23.584375 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584392 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584445 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584481 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584493 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584494 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584513 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584586 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584593 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: E1203 14:08:23.584596 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584628 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584637 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584645 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584656 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584484 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584680 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584677 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584698 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584605 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584711 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584749 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: E1203 14:08:23.584913 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:23.584953 master-0 kubenswrapper[4430]: I1203 14:08:23.584987 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585026 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585064 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585096 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585130 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585160 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585190 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585220 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585271 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585315 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585343 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585372 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585400 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585451 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: E1203 14:08:23.585511 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585555 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585588 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585620 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585644 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585670 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585705 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: E1203 14:08:23.585793 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585833 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:23.585841 master-0 kubenswrapper[4430]: I1203 14:08:23.585872 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: I1203 14:08:23.585907 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.585963 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: I1203 14:08:23.586015 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: I1203 14:08:23.586049 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586162 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586293 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586363 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586447 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586505 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586553 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:23.586623 master-0 kubenswrapper[4430]: E1203 14:08:23.586619 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:23.586949 master-0 kubenswrapper[4430]: E1203 14:08:23.586688 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:23.586949 master-0 kubenswrapper[4430]: E1203 14:08:23.586758 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:23.586949 master-0 kubenswrapper[4430]: E1203 14:08:23.586845 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:23.586949 master-0 kubenswrapper[4430]: E1203 14:08:23.586918 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:23.587153 master-0 kubenswrapper[4430]: E1203 14:08:23.587113 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:23.587205 master-0 kubenswrapper[4430]: E1203 14:08:23.587188 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:23.587296 master-0 kubenswrapper[4430]: E1203 14:08:23.587265 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:23.587391 master-0 kubenswrapper[4430]: E1203 14:08:23.587360 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:23.587494 master-0 kubenswrapper[4430]: E1203 14:08:23.587467 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:23.587579 master-0 kubenswrapper[4430]: E1203 14:08:23.587556 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:23.587803 master-0 kubenswrapper[4430]: E1203 14:08:23.587673 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:23.587803 master-0 kubenswrapper[4430]: E1203 14:08:23.587745 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:23.587894 master-0 kubenswrapper[4430]: E1203 14:08:23.587854 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:23.588239 master-0 kubenswrapper[4430]: E1203 14:08:23.587927 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:23.588239 master-0 kubenswrapper[4430]: E1203 14:08:23.588149 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:23.588361 master-0 kubenswrapper[4430]: E1203 14:08:23.588263 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:23.588361 master-0 kubenswrapper[4430]: E1203 14:08:23.588344 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:23.588479 master-0 kubenswrapper[4430]: E1203 14:08:23.588442 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:23.588528 master-0 kubenswrapper[4430]: E1203 14:08:23.588514 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:23.588645 master-0 kubenswrapper[4430]: E1203 14:08:23.588581 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:23.588708 master-0 kubenswrapper[4430]: E1203 14:08:23.588657 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:23.588754 master-0 kubenswrapper[4430]: E1203 14:08:23.588734 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:23.589217 master-0 kubenswrapper[4430]: E1203 14:08:23.588922 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:23.589217 master-0 kubenswrapper[4430]: E1203 14:08:23.589062 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:23.589217 master-0 kubenswrapper[4430]: E1203 14:08:23.589189 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:23.589350 master-0 kubenswrapper[4430]: E1203 14:08:23.589295 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:23.589554 master-0 kubenswrapper[4430]: E1203 14:08:23.589390 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:23.589554 master-0 kubenswrapper[4430]: E1203 14:08:23.589517 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:23.589651 master-0 kubenswrapper[4430]: E1203 14:08:23.589615 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:23.589697 master-0 kubenswrapper[4430]: E1203 14:08:23.589662 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:23.589729 master-0 kubenswrapper[4430]: E1203 14:08:23.589701 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.589887 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.589993 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590068 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590108 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590171 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590273 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590351 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:23.590519 master-0 kubenswrapper[4430]: E1203 14:08:23.590479 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:23.590808 master-0 kubenswrapper[4430]: E1203 14:08:23.590549 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:23.590808 master-0 kubenswrapper[4430]: E1203 14:08:23.590631 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:23.590808 master-0 kubenswrapper[4430]: E1203 14:08:23.590692 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:23.590808 master-0 kubenswrapper[4430]: E1203 14:08:23.590781 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:23.590936 master-0 kubenswrapper[4430]: E1203 14:08:23.590860 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:23.590936 master-0 kubenswrapper[4430]: E1203 14:08:23.590920 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:23.597565 master-0 kubenswrapper[4430]: I1203 14:08:23.597491 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:23.597778 master-0 kubenswrapper[4430]: I1203 14:08:23.597748 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:24.038801 master-0 kubenswrapper[4430]: I1203 14:08:24.038751 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:24.041001 master-0 kubenswrapper[4430]: I1203 14:08:24.040851 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:24.041001 master-0 
kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:24.041001 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:24.041001 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:24.041518 master-0 kubenswrapper[4430]: I1203 14:08:24.040970 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:24.056518 master-0 kubenswrapper[4430]: I1203 14:08:24.051951 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"08acd077553f72d39a3338430ca8c622c61126e0810d50f76c2ab4bda2d6067f"} Dec 03 14:08:24.056518 master-0 kubenswrapper[4430]: I1203 14:08:24.054638 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"6294fba576b1de2ecb3035eff143115b1a07a2fa711867db163f33fa80b48bf3"} Dec 03 14:08:24.056518 master-0 kubenswrapper[4430]: I1203 14:08:24.054662 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"205a91247174fbcd49caa70233b8561e1b597ab7d8471618046e45f8d26ee607"} Dec 03 14:08:24.056518 master-0 kubenswrapper[4430]: I1203 14:08:24.056007 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"d17eedb1a1c6da03c53512f5e94fa46ea6cf769c08d5c7f4470e880abf335782"} Dec 03 
14:08:24.058846 master-0 kubenswrapper[4430]: I1203 14:08:24.058808 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"b5128cf16e986912a19370e205039ae1d79f9d6befc7a242cf621d37e267ba26"} Dec 03 14:08:24.058846 master-0 kubenswrapper[4430]: I1203 14:08:24.058842 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"d2ca2ee49f1caf825f3be17bc4c4d0dd12b887ed189295e71da9c01631da67fc"} Dec 03 14:08:24.060349 master-0 kubenswrapper[4430]: I1203 14:08:24.060304 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="d0b962df8004724dd892c3623bab7db6773c671e1639de8604eb91c403982d54" exitCode=0 Dec 03 14:08:24.060444 master-0 kubenswrapper[4430]: I1203 14:08:24.060372 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"d0b962df8004724dd892c3623bab7db6773c671e1639de8604eb91c403982d54"} Dec 03 14:08:24.061660 master-0 kubenswrapper[4430]: I1203 14:08:24.061638 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:24.246163 master-0 kubenswrapper[4430]: I1203 14:08:24.246112 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:24.246285 master-0 kubenswrapper[4430]: I1203 
14:08:24.246181 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:24.246285 master-0 kubenswrapper[4430]: I1203 14:08:24.246206 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:24.246285 master-0 kubenswrapper[4430]: I1203 14:08:24.246230 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:24.246380 master-0 kubenswrapper[4430]: I1203 14:08:24.246295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.246380 master-0 kubenswrapper[4430]: I1203 14:08:24.246317 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: 
\"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:24.246380 master-0 kubenswrapper[4430]: E1203 14:08:24.246337 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246380 master-0 kubenswrapper[4430]: E1203 14:08:24.246379 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246529 master-0 kubenswrapper[4430]: E1203 14:08:24.246442 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246529 master-0 kubenswrapper[4430]: E1203 14:08:24.246475 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246529 master-0 kubenswrapper[4430]: E1203 14:08:24.246487 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246529 master-0 kubenswrapper[4430]: E1203 14:08:24.246493 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access 
podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.246470539 +0000 UTC m=+6.869384615 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246529 master-0 kubenswrapper[4430]: E1203 14:08:24.246501 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246538 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246552 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246557 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246577 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246579 4430 
projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246593 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246607 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: I1203 14:08:24.246352 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:24.246676 master-0 kubenswrapper[4430]: E1203 14:08:24.246607 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246509 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246718 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object 
"openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246772 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246784 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246720 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246840 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246580 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.246559061 +0000 UTC m=+6.869473197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246895 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.246881791 +0000 UTC m=+6.869796067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.246984 master-0 kubenswrapper[4430]: E1203 14:08:24.246918 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.246906961 +0000 UTC m=+6.869821037 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: I1203 14:08:24.247009 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247044 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247062 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247070 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247084 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:26.247064556 +0000 UTC m=+6.869978632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247120 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.247113057 +0000 UTC m=+6.870027133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247134 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.247128538 +0000 UTC m=+6.870042814 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247233 master-0 kubenswrapper[4430]: E1203 14:08:24.247152 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.247144628 +0000 UTC m=+6.870058914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247497 master-0 kubenswrapper[4430]: I1203 14:08:24.247262 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.247497 master-0 kubenswrapper[4430]: I1203 14:08:24.247296 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:24.247589 master-0 kubenswrapper[4430]: E1203 14:08:24.247507 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.247589 master-0 kubenswrapper[4430]: E1203 14:08:24.247523 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.247589 master-0 kubenswrapper[4430]: E1203 14:08:24.247535 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247589 master-0 kubenswrapper[4430]: E1203 14:08:24.247574 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.2475652 +0000 UTC m=+6.870479276 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247716 master-0 kubenswrapper[4430]: E1203 14:08:24.247581 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:24.247716 master-0 kubenswrapper[4430]: E1203 14:08:24.247614 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.247716 master-0 kubenswrapper[4430]: E1203 14:08:24.247625 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247716 master-0 kubenswrapper[4430]: E1203 14:08:24.247678 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.247666033 +0000 UTC m=+6.870580109 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.247846 master-0 kubenswrapper[4430]: I1203 14:08:24.247785 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:24.247955 master-0 kubenswrapper[4430]: E1203 14:08:24.247912 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.247955 master-0 kubenswrapper[4430]: E1203 14:08:24.247950 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248018 master-0 kubenswrapper[4430]: E1203 14:08:24.247961 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248018 master-0 kubenswrapper[4430]: I1203 14:08:24.247967 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: 
\"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:24.248078 master-0 kubenswrapper[4430]: E1203 14:08:24.248044 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.248031983 +0000 UTC m=+6.870946069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248113 master-0 kubenswrapper[4430]: E1203 14:08:24.248092 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248113 master-0 kubenswrapper[4430]: E1203 14:08:24.248107 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248170 master-0 kubenswrapper[4430]: E1203 14:08:24.248118 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248170 master-0 kubenswrapper[4430]: 
I1203 14:08:24.248133 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:24.248170 master-0 kubenswrapper[4430]: E1203 14:08:24.248169 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.248146437 +0000 UTC m=+6.871060513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248271 master-0 kubenswrapper[4430]: E1203 14:08:24.248260 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248305 master-0 kubenswrapper[4430]: E1203 14:08:24.248277 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248339 master-0 kubenswrapper[4430]: E1203 14:08:24.248307 4430 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod 
openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248370 master-0 kubenswrapper[4430]: E1203 14:08:24.248343 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.248333552 +0000 UTC m=+6.871247828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248406 master-0 kubenswrapper[4430]: E1203 14:08:24.248348 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248406 master-0 kubenswrapper[4430]: I1203 14:08:24.248304 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:24.248471 master-0 kubenswrapper[4430]: E1203 14:08:24.248408 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not 
registered Dec 03 14:08:24.248471 master-0 kubenswrapper[4430]: E1203 14:08:24.248431 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248471 master-0 kubenswrapper[4430]: I1203 14:08:24.248451 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:24.248571 master-0 kubenswrapper[4430]: E1203 14:08:24.248484 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.248475176 +0000 UTC m=+6.871389462 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248608 master-0 kubenswrapper[4430]: E1203 14:08:24.248570 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248608 master-0 kubenswrapper[4430]: E1203 14:08:24.248583 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248608 master-0 kubenswrapper[4430]: E1203 14:08:24.248590 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248695 master-0 kubenswrapper[4430]: I1203 14:08:24.248603 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:24.248695 master-0 kubenswrapper[4430]: E1203 14:08:24.248615 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:26.24860855 +0000 UTC m=+6.871522626 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248757 master-0 kubenswrapper[4430]: E1203 14:08:24.248721 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248757 master-0 kubenswrapper[4430]: I1203 14:08:24.248749 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:24.248901 master-0 kubenswrapper[4430]: E1203 14:08:24.248850 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:24.248901 master-0 kubenswrapper[4430]: E1203 14:08:24.248876 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248901 master-0 kubenswrapper[4430]: E1203 14:08:24.248890 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object 
"openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.248993 master-0 kubenswrapper[4430]: E1203 14:08:24.248741 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.248993 master-0 kubenswrapper[4430]: E1203 14:08:24.248956 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.249055 master-0 kubenswrapper[4430]: E1203 14:08:24.249015 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.249001251 +0000 UTC m=+6.871915337 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.249055 master-0 kubenswrapper[4430]: E1203 14:08:24.249044 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.249035752 +0000 UTC m=+6.871950058 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.250092 master-0 kubenswrapper[4430]: I1203 14:08:24.250059 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:24.250216 master-0 kubenswrapper[4430]: E1203 14:08:24.250179 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:24.250216 master-0 kubenswrapper[4430]: E1203 14:08:24.250196 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:24.250282 master-0 kubenswrapper[4430]: E1203 14:08:24.250231 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:26.250222926 +0000 UTC m=+6.873137002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:24.353137 master-0 kubenswrapper[4430]: I1203 14:08:24.352697 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:24.353296 master-0 kubenswrapper[4430]: I1203 14:08:24.353151 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:24.353296 master-0 kubenswrapper[4430]: E1203 14:08:24.352944 4430 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:24.353296 master-0 kubenswrapper[4430]: E1203 14:08:24.353278 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:24.353449 master-0 kubenswrapper[4430]: E1203 14:08:24.353318 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.353293429 +0000 UTC m=+8.976207505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:24.353449 master-0 kubenswrapper[4430]: E1203 14:08:24.353348 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.35333098 +0000 UTC m=+8.976245076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:24.353449 master-0 kubenswrapper[4430]: I1203 14:08:24.353186 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:24.353449 master-0 kubenswrapper[4430]: I1203 14:08:24.353389 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:24.353449 master-0 
kubenswrapper[4430]: E1203 14:08:24.353397 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:24.353449 master-0 kubenswrapper[4430]: I1203 14:08:24.353441 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.353722 master-0 kubenswrapper[4430]: I1203 14:08:24.353478 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:24.353722 master-0 kubenswrapper[4430]: E1203 14:08:24.353527 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:24.353722 master-0 kubenswrapper[4430]: E1203 14:08:24.353631 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:24.353852 master-0 kubenswrapper[4430]: E1203 14:08:24.353714 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:24.353852 master-0 kubenswrapper[4430]: E1203 14:08:24.353518 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.353499185 +0000 UTC m=+8.976413261 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:24.353852 master-0 kubenswrapper[4430]: I1203 14:08:24.353839 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:24.353989 master-0 kubenswrapper[4430]: E1203 14:08:24.353869 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.353836855 +0000 UTC m=+8.976750931 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:24.353989 master-0 kubenswrapper[4430]: E1203 14:08:24.353879 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:24.353989 master-0 kubenswrapper[4430]: E1203 14:08:24.353914 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.353904267 +0000 UTC m=+8.976818343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:24.353989 master-0 kubenswrapper[4430]: I1203 14:08:24.353955 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:24.353989 master-0 kubenswrapper[4430]: I1203 14:08:24.353990 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: 
\"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354030 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: I1203 14:08:24.354047 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354065 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354057741 +0000 UTC m=+8.976971817 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354092 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354118 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354111393 +0000 UTC m=+8.977025469 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: I1203 14:08:24.354116 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354130 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.354125123 +0000 UTC m=+8.977039199 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354157 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: I1203 14:08:24.354162 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354176 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354171064 +0000 UTC m=+8.977085140 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: I1203 14:08:24.354191 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354201 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: I1203 14:08:24.354238 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.354251 master-0 kubenswrapper[4430]: E1203 14:08:24.354256 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354273 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354281 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354274447 +0000 UTC m=+8.977188523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354315 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354318 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354308318 +0000 UTC m=+8.977222624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354348 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.354340969 +0000 UTC m=+8.977255045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354350 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354370 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354383 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.35437542 +0000 UTC m=+8.977289496 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354410 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354440 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354476 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354468133 +0000 UTC m=+8.977382439 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354520 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354527 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354544 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354555 4430 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354547525 +0000 UTC m=+8.977461601 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354578 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354611 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354617 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354658 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" 
(UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354669 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354691 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354684289 +0000 UTC m=+8.977598365 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354705 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354710 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354727 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.35472136 +0000 UTC m=+8.977635436 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354760 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354764 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354783 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354788 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.354782722 +0000 UTC m=+8.977696798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354808 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354814 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354831 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354825813 +0000 UTC m=+8.977739889 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354852 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354862 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354896 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354878414 +0000 UTC m=+8.977792490 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: I1203 14:08:24.354914 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:24.354892 master-0 kubenswrapper[4430]: E1203 14:08:24.354918 4430 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.354967 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.354976 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355028 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355058 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object 
"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355078 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.354931 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.354923826 +0000 UTC m=+8.977838132 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355104 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355117 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355130 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle 
podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355122791 +0000 UTC m=+8.978036857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355156 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355162 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355202 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355195784 +0000 UTC m=+8.978109860 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355218 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355211284 +0000 UTC m=+8.978125360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355187 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355231 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355225704 +0000 UTC m=+8.978139780 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355247 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355242765 +0000 UTC m=+8.978156841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355261 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355254905 +0000 UTC m=+8.978168981 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355282 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355275846 +0000 UTC m=+8.978189922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355300 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355296066 +0000 UTC m=+8.978210142 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355322 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355409 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355411 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355460 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 
14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355478 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355468221 +0000 UTC m=+8.978382297 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355510 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355505 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355550 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 
14:08:24.355571 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355583 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355592 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355586485 +0000 UTC m=+8.978500561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355608 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355601195 +0000 UTC m=+8.978515271 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355624 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355615805 +0000 UTC m=+8.978529881 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355627 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355649 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355661 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" 
failed. No retries permitted until 2025-12-03 14:08:28.355651226 +0000 UTC m=+8.978565522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355699 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355724 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355719128 +0000 UTC m=+8.978633204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355743 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355749 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355781 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.35577288 +0000 UTC m=+8.978687196 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355792 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355805 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355816 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355809181 +0000 UTC m=+8.978723257 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355847 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355857 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355885 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355861262 +0000 UTC m=+8.978775558 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355909 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355922 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355910624 +0000 UTC m=+8.978824920 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355941 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.355932414 +0000 UTC m=+8.978846700 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355961 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.355983 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356016 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.355983 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356064 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356051928 +0000 UTC m=+8.978966004 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356038 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356023 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356089 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356096 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.356089259 +0000 UTC m=+8.979003335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356108 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356133 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356143 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356114 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356155 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356193 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object 
"openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356163 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356149701 +0000 UTC m=+8.979063787 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356223 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356216853 +0000 UTC m=+8.979130929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356238 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356229943 +0000 UTC m=+8.979144019 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356258 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356282 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356307 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356318 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356309525 +0000 UTC m=+8.979223591 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356341 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356333166 +0000 UTC m=+8.979247242 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356358 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356361 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356380 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:24.356636 master-0 
kubenswrapper[4430]: E1203 14:08:24.356409 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356432 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356385 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356378867 +0000 UTC m=+8.979292943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356458 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356452389 +0000 UTC m=+8.979366465 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356478 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356498 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: I1203 14:08:24.356535 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356558 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356582 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: 
object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356606 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:24.356636 master-0 kubenswrapper[4430]: E1203 14:08:24.356770 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.356536292 +0000 UTC m=+8.979450368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.356890 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.356947 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.356983 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357015 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357048 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357086 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357164 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357255 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357288 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357317 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357343 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357369 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357452 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357487 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357522 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod 
\"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357551 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357582 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357613 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357656 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357687 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357716 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357746 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357792 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357819 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357858 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357887 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357917 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357958 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.357989 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358055 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358142 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358194 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: 
\"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358273 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358301 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358332 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358379 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358406 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358461 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358495 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358526 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358553 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 
14:08:24.358700 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358740 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358772 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358798 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358835 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: 
\"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358861 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358901 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358930 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358954 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.358980 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359007 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359031 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359060 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359089 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359114 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359140 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359170 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359201 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359329 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359349 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359375 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359414 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359477 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359503 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359529 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359554 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359582 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359617 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359640 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359662 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359681 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359702 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359754 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359773 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359796 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359818 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359844 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359871 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359893 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359911 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359940 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359970 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.359990 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360010 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360032 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360052 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360072 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360091 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360112 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360145 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360167 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360187 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360208 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360229 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360249 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360281 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360312 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360351 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360373 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360392 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360412 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: I1203 14:08:24.360503 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360731 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360775 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.360764372 +0000 UTC m=+8.983678448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360838 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360861 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360880 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.360870055 +0000 UTC m=+8.983784131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360920 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.360905636 +0000 UTC m=+8.983819722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360927 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360961 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.360953827 +0000 UTC m=+8.983867993 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.360993 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361042 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.360981208 +0000 UTC m=+8.983895384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361058 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361087 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36105583 +0000 UTC m=+8.983969906 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361102 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.361096361 +0000 UTC m=+8.984010437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361122 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.361113842 +0000 UTC m=+8.984028008 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361141 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.361132522 +0000 UTC m=+8.984046868 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361167 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361195 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:08:24.362618 master-0 kubenswrapper[4430]: E1203 14:08:24.361224 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361259 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361307 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361331 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361340 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361386 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361400 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361411 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361472 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361487 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361508 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361549 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361570 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361620 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361651 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361688 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361701 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361710 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361805 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361853 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361895 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361920 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361931 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361963 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362109 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362180 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362247 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362276 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362302 4430 configmap.go:193] Couldn't get
configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362369 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362381 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362386 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362480 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362531 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362566 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362617 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362630 4430 projected.go:288] Couldn't get configMap 
openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362645 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362683 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362723 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362737 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363349 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363399 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363452 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:24.371101 
master-0 kubenswrapper[4430]: E1203 14:08:24.363488 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363495 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363517 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363517 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363568 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363555 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363604 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363625 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:24.371101 master-0 
kubenswrapper[4430]: E1203 14:08:24.363652 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362649 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363669 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363626 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361162 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.361152903 +0000 UTC m=+8.984067579 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363718 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363729 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363729 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363740 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.363721016 +0000 UTC m=+8.986635272 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363761 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363771 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.363757827 +0000 UTC m=+8.986672173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363777 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363555 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363797 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.363786558 +0000 UTC m=+8.986700854 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363810 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363824 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361895 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363833 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363850 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363455 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363886 4430 configmap.go:193] Couldn't get 
configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363892 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363893 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363574 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361141 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361621 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362751 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363956 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363980 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object 
"openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.361259 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362788 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362790 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362802 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362842 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362844 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362851 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362890 4430 
configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362893 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362897 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362931 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.362951 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363131 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363558 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364150 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: 
E1203 14:08:24.364196 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363585 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363591 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363676 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363680 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364370 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363828 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.363816349 +0000 UTC m=+8.986730665 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363924 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.363999 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364442 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364397845 +0000 UTC m=+8.987312131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364052 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364467 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.364458047 +0000 UTC m=+8.987372123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364486 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364479118 +0000 UTC m=+8.987393194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364163 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364504 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364495598 +0000 UTC m=+8.987409674 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364526 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364513709 +0000 UTC m=+8.987427785 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364544 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364536559 +0000 UTC m=+8.987450635 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364560 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.3645529 +0000 UTC m=+8.987466976 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364574 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364575 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36456692 +0000 UTC m=+8.987480996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364629 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364640 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364623952 +0000 UTC m=+8.987538028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364662 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364657073 +0000 UTC m=+8.987571139 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364679 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364672943 +0000 UTC m=+8.987587019 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364635 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364699 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364689664 +0000 UTC m=+8.987603730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364716 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364710134 +0000 UTC m=+8.987624210 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364730 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364725225 +0000 UTC m=+8.987639301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364702 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364745 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364738595 +0000 UTC m=+8.987652671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364904 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364840708 +0000 UTC m=+8.987754944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.364947 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364931971 +0000 UTC m=+8.987846057 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365006 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.364991892 +0000 UTC m=+8.987905978 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365035 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365025923 +0000 UTC m=+8.987940009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365104 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365090725 +0000 UTC m=+8.988004811 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365159 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365122686 +0000 UTC m=+8.988036772 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365188 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365178868 +0000 UTC m=+8.988092954 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365249 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365208688 +0000 UTC m=+8.988122774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365277 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36526718 +0000 UTC m=+8.988181266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365332 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365320822 +0000 UTC m=+8.988234908 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365361 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365351703 +0000 UTC m=+8.988265789 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365408 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365377603 +0000 UTC m=+8.988291689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365478 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365466036 +0000 UTC m=+8.988380122 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365533 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365498297 +0000 UTC m=+8.988412383 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365563 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365551988 +0000 UTC m=+8.988466074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365588 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365579039 +0000 UTC m=+8.988493125 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365642 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36563156 +0000 UTC m=+8.988545646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365666 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365656361 +0000 UTC m=+8.988570457 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365709 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365675612 +0000 UTC m=+8.988589698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365733 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365724263 +0000 UTC m=+8.988638349 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365753 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365744514 +0000 UTC m=+8.988658600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:24.371101 master-0 kubenswrapper[4430]: E1203 14:08:24.365808 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365797195 +0000 UTC m=+8.988711281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.365834 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365826086 +0000 UTC m=+8.988740172 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.365888 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365876537 +0000 UTC m=+8.988790623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.365916 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.365903508 +0000 UTC m=+8.988817594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.365962 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36595125 +0000 UTC m=+8.988865336 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.365989 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36598064 +0000 UTC m=+8.988894726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366070 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366054793 +0000 UTC m=+8.988969039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366133 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366092634 +0000 UTC m=+8.989006940 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366165 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366155115 +0000 UTC m=+8.989069201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366229 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366187416 +0000 UTC m=+8.989101722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366299 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366253718 +0000 UTC m=+8.989167804 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366335 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36632165 +0000 UTC m=+8.989235926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366389 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366354021 +0000 UTC m=+8.989268107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366462 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366409153 +0000 UTC m=+8.989323239 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366492 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366482785 +0000 UTC m=+8.989396871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366517 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366508865 +0000 UTC m=+8.989422951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366538 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366531996 +0000 UTC m=+8.989446082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366562 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366553847 +0000 UTC m=+8.989467933 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366616 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366605238 +0000 UTC m=+8.989519324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366643 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366634859 +0000 UTC m=+8.989548945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366667 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36665897 +0000 UTC m=+8.989573056 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366694 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36668312 +0000 UTC m=+8.989597206 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366720 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366710841 +0000 UTC m=+8.989624927 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366748 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366736942 +0000 UTC m=+8.989651028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366774 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366764693 +0000 UTC m=+8.989678779 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366834 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366823894 +0000 UTC m=+8.989737990 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366871 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366859495 +0000 UTC m=+8.989773791 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366907 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366891426 +0000 UTC m=+8.989805722 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366941 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366928867 +0000 UTC m=+8.989843173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366972 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366960748 +0000 UTC m=+8.989875054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.366998 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.366989299 +0000 UTC m=+8.989903385 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367021 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.36701351 +0000 UTC m=+8.989927596 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367044 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367036931 +0000 UTC m=+8.989951017 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367070 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367062081 +0000 UTC m=+8.989976167 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367094 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367085692 +0000 UTC m=+8.989999778 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367125 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367118213 +0000 UTC m=+8.990032299 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367150 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367140993 +0000 UTC m=+8.990055079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367174 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367165214 +0000 UTC m=+8.990079300 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367235 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367219816 +0000 UTC m=+8.990133902 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367269 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367259007 +0000 UTC m=+8.990173303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367295 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367286418 +0000 UTC m=+8.990200734 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367321 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367310998 +0000 UTC m=+8.990225304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367346 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367338609 +0000 UTC m=+8.990252695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367372 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36736119 +0000 UTC m=+8.990275276 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367396 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36738749 +0000 UTC m=+8.990301576 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367433 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367411861 +0000 UTC m=+8.990325947 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367472 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367460273 +0000 UTC m=+8.990374569 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367503 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367490483 +0000 UTC m=+8.990404779 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367536 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367524204 +0000 UTC m=+8.990438510 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367569 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367555995 +0000 UTC m=+8.990470281 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367596 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367588016 +0000 UTC m=+8.990502102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367626 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367613787 +0000 UTC m=+8.990528083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367655 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367645728 +0000 UTC m=+8.990560044 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367679 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367670329 +0000 UTC m=+8.990584415 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367703 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367693979 +0000 UTC m=+8.990608065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367725 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.36771648 +0000 UTC m=+8.990630566 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367750 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367742011 +0000 UTC m=+8.990656097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367772 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367763961 +0000 UTC m=+8.990678047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367795 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:28.367786952 +0000 UTC m=+8.990701038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367819 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367810163 +0000 UTC m=+8.990724249 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367840 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367832813 +0000 UTC m=+8.990746899 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:24.381206 master-0 kubenswrapper[4430]: E1203 14:08:24.367859 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:28.367853244 +0000 UTC m=+8.990767330 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:24.409787 master-0 kubenswrapper[4430]: I1203 14:08:24.409701 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:24.415149 master-0 kubenswrapper[4430]: I1203 14:08:24.415096 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:24.583666 master-0 kubenswrapper[4430]: I1203 14:08:24.583606 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:24.583945 master-0 kubenswrapper[4430]: I1203 14:08:24.583729 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:24.583998 master-0 kubenswrapper[4430]: I1203 14:08:24.583951 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:24.583998 master-0 kubenswrapper[4430]: E1203 14:08:24.583946 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:24.584123 master-0 kubenswrapper[4430]: I1203 14:08:24.584001 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:24.584123 master-0 kubenswrapper[4430]: I1203 14:08:24.584014 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:24.584123 master-0 kubenswrapper[4430]: I1203 14:08:24.584107 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:24.584261 master-0 kubenswrapper[4430]: E1203 14:08:24.584102 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:24.584374 master-0 kubenswrapper[4430]: E1203 14:08:24.584307 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:24.584441 master-0 kubenswrapper[4430]: E1203 14:08:24.584329 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:08:24.584441 master-0 kubenswrapper[4430]: E1203 14:08:24.584393 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:24.584561 master-0 kubenswrapper[4430]: E1203 14:08:24.584474 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:24.675888 master-0 kubenswrapper[4430]: I1203 14:08:24.675795 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:24.675888 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:24.675888 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:24.675888 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:24.675888 master-0 kubenswrapper[4430]: I1203 14:08:24.675872 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:24.761733 master-0 kubenswrapper[4430]: E1203 14:08:24.761637 4430 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 14:08:25.067692 master-0 kubenswrapper[4430]: I1203 14:08:25.067615 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"4f4048bb8a9818d0d6d08b3a0c4266128e22b30fad60e11c85437aeb1c539071"}
Dec 03 14:08:25.067692 master-0 kubenswrapper[4430]: I1203 14:08:25.067676 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"437f7ec09cdfbaaf0f2d8c750d87de41470c054124a451b74aa361e784f32913"}
Dec 03 14:08:25.067692 master-0 kubenswrapper[4430]: I1203 14:08:25.067686 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"91af6ef7f9da44b2fd5666c3d41bbac57f3b1b2e9b53696653ba5f67acb275c2"}
Dec 03 14:08:25.067692 master-0 kubenswrapper[4430]: I1203 14:08:25.067696 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"9d0e540b9c2e29f516a9bcf74d0ff9cb2c9f714e5085f646fe71482827081d16"}
Dec 03 14:08:25.069912 master-0 kubenswrapper[4430]: I1203 14:08:25.069848 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="56acd5e96ee7f91a6042d64d7f2020688eeef572076f93ecef2df2eecbf25577" exitCode=0
Dec 03 14:08:25.069912 master-0 kubenswrapper[4430]: I1203 14:08:25.069917 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"56acd5e96ee7f91a6042d64d7f2020688eeef572076f93ecef2df2eecbf25577"}
Dec 03 14:08:25.071610 master-0 kubenswrapper[4430]: I1203 14:08:25.071528 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/1.log"
Dec 03 14:08:25.071876 master-0 kubenswrapper[4430]: I1203 14:08:25.071853 4430 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="b5128cf16e986912a19370e205039ae1d79f9d6befc7a242cf621d37e267ba26" exitCode=255
Dec 03 14:08:25.074353 master-0 kubenswrapper[4430]: I1203 14:08:25.073536 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 14:08:25.074353 master-0 kubenswrapper[4430]: I1203 14:08:25.072388 4430 scope.go:117] "RemoveContainer" containerID="b5128cf16e986912a19370e205039ae1d79f9d6befc7a242cf621d37e267ba26"
Dec 03 14:08:25.074353 master-0 kubenswrapper[4430]: I1203 14:08:25.072186 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"b5128cf16e986912a19370e205039ae1d79f9d6befc7a242cf621d37e267ba26"}
Dec 03 14:08:25.088124 master-0 kubenswrapper[4430]: I1203 14:08:25.088066 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:25.088365 master-0 kubenswrapper[4430]: E1203 14:08:25.088322 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.088365 master-0 kubenswrapper[4430]: E1203 14:08:25.088359 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.088483 master-0 kubenswrapper[4430]: E1203 14:08:25.088373 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.088608 master-0 kubenswrapper[4430]: E1203 14:08:25.088563 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.088535006 +0000 UTC m=+9.711449082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089076 master-0 kubenswrapper[4430]: I1203 14:08:25.089035 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:25.089147 master-0 kubenswrapper[4430]: I1203 14:08:25.089121 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:25.089197 master-0 kubenswrapper[4430]: I1203 14:08:25.089154 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:25.089197 master-0 kubenswrapper[4430]: I1203 14:08:25.089180 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:25.089296 master-0 kubenswrapper[4430]: I1203 14:08:25.089218 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:25.089347 master-0 kubenswrapper[4430]: I1203 14:08:25.089327 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:25.089347 master-0 kubenswrapper[4430]: E1203 14:08:25.089330 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: E1203 14:08:25.089338 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: E1203 14:08:25.089355 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: E1203 14:08:25.089366 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: E1203 14:08:25.089374 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: E1203 14:08:25.089379 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089443 master-0 kubenswrapper[4430]: I1203 14:08:25.089431 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089498 4430 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089510 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089414381 +0000 UTC m=+9.712328457 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089533 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089524574 +0000 UTC m=+9.712438640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089555 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089567 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089567 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089575 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089582 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089593 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089604 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089597006 +0000 UTC m=+9.712511082 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089509 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089631 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089641 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089512 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089658 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.089678 master-0 kubenswrapper[4430]: E1203 14:08:25.089631 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089613457 +0000 UTC m=+9.712527693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: E1203 14:08:25.089764 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089752681 +0000 UTC m=+9.712666877 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: E1203 14:08:25.089784 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.089777221 +0000 UTC m=+9.712691507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.089911 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.089943 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.089968 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.089995 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.090046 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.090145 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.090181 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.090240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:25.090365 master-0 kubenswrapper[4430]: I1203 14:08:25.090333 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090464 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090538 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090682 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090744 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:25.090966 master-0 kubenswrapper[4430]: I1203 14:08:25.090817 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:25.091192 master-0 kubenswrapper[4430]: I1203 14:08:25.090975 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:25.091192 master-0 kubenswrapper[4430]: I1203 14:08:25.091109 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:25.091275 master-0 kubenswrapper[4430]: I1203 14:08:25.091209 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:25.091275 master-0 kubenswrapper[4430]: I1203 14:08:25.091248 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:25.091358 master-0 kubenswrapper[4430]: I1203 14:08:25.091283 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:25.091358 master-0 kubenswrapper[4430]: I1203 14:08:25.091332 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:25.091489 master-0 kubenswrapper[4430]: I1203 14:08:25.091391 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:25.091489 master-0 kubenswrapper[4430]: I1203 14:08:25.091466 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091571 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091588 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091597 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091625 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.091617784 +0000 UTC m=+9.714532040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091725 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091750 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091766 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091768 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091790 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091801 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091809 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.091794769 +0000 UTC m=+9.714709015 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091829 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091849 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091859 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091899 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091909 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091938 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203
14:08:25.091943 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091963 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091966 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091995 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091997 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092010 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092012 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092015 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 
14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092025 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092029 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092039 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092041 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092061 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.091977 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092071 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092084 4430 projected.go:288] 
Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092085 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092095 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092106 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092102 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092146 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:25.092099 master-0 kubenswrapper[4430]: E1203 14:08:25.092158 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092152 4430 projected.go:288] Couldn't get configMap 
openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092200 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092218 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091822 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092288 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092299 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092162 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 
14:08:25.092337 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092091 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092371 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091950 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091916 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092451 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091836 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.09182388 +0000 UTC m=+9.714738166 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091762 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092522 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092496209 +0000 UTC m=+9.715410285 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092533 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092546 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092547 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.09253506 +0000 UTC m=+9.715449146 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092568 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.09255936 +0000 UTC m=+9.715473446 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091915 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092582 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092575771 +0000 UTC m=+9.715489837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092586 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092602 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092603 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092595581 +0000 UTC m=+9.715509657 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092617 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092612802 +0000 UTC m=+9.715526878 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091976 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092634 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092626552 +0000 UTC m=+9.715540628 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092635 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092651 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092643063 +0000 UTC m=+9.715557139 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092047 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092668 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092660043 +0000 UTC m=+9.715574119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092690 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092683834 +0000 UTC m=+9.715597910 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092044 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092702 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092696954 +0000 UTC m=+9.715611020 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092708 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092717 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092709605 +0000 UTC m=+9.715623681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092074 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092735 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092725925 +0000 UTC m=+9.715640001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092751 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092746156 +0000 UTC m=+9.715660232 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092141 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092765 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092759346 +0000 UTC m=+9.715673422 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092773 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092778 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092773177 +0000 UTC m=+9.715687253 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092791 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:29.092785967 +0000 UTC m=+9.715700043 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092178 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092806 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092799487 +0000 UTC m=+9.715713563 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092834 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:29.092824118 +0000 UTC m=+9.715738194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.091955 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092863 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092894 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092023 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092955 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092926 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092918141 +0000 UTC m=+9.715832217 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.093963 master-0 kubenswrapper[4430]: E1203 14:08:25.092999 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:29.092992533 +0000 UTC m=+9.715906609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.589996 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590387 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590088 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590438 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590489 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590142 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590198 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590238 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590205 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590255 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590254 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590555 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590285 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590571 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: E1203 14:08:25.590598 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590623 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590627 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590318 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590350 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590355 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590665 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590669 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590362 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590680 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590391 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590376 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590394 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590408 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590129 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590443 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590128 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590273 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590273 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590568 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590296 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590591 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590809 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590600 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:25.590780 master-0 kubenswrapper[4430]: I1203 14:08:25.590604 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.590829 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590612 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590872 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590617 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590313 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590638 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590654 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590660 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590371 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.590973 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590685 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590387 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590369 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590691 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590712 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590577 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591066 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590847 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590850 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590629 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590309 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.591005 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590693 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: I1203 14:08:25.590599 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591239 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591329 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591483 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591572 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591644 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591711 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591844 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.591929 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592004 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592115 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592206 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592308 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592378 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592468 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592526 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592574 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592620 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592708 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592825 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.592917 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593001 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593064 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593123 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593227 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593369 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593448 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593535 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593631 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:25.593642 master-0 kubenswrapper[4430]: E1203 14:08:25.593748 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.593823 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.593894 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.593997 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594061 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594142 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594221 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594305 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594388 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594470 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594550 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594726 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594797 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594886 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.594964 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595040 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595129 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595240 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595314 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595382 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595508 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595577 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:25.595685 master-0 kubenswrapper[4430]: E1203 14:08:25.595712 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:25.596778 master-0 kubenswrapper[4430]: E1203 14:08:25.595790 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:25.596778 master-0 kubenswrapper[4430]: E1203 14:08:25.595856 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:25.596778 master-0 kubenswrapper[4430]: E1203 14:08:25.595946 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:25.596778 master-0 kubenswrapper[4430]: E1203 14:08:25.596009 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:25.630844 master-0 kubenswrapper[4430]: I1203 14:08:25.630754 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:25.675689 master-0 kubenswrapper[4430]: I1203 14:08:25.675584 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:25.675689 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:25.675689 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:25.675689 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:25.676314 master-0 kubenswrapper[4430]: I1203 14:08:25.675720 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:25.713304 master-0 kubenswrapper[4430]: I1203 14:08:25.713212 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:25.717768 master-0 kubenswrapper[4430]: I1203 14:08:25.717702 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.721788 4430 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: 
I1203 14:08:26.722003 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.722860 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723120 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723192 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723241 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723308 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723362 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723467 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723482 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723488 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723530 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723569 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723597 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723612 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.723731 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723792 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723793 4430 projected.go:288] Couldn't get configMap 
openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723811 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723824 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723826 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723835 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723791 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723892 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723900 4430 projected.go:194] Error preparing data for projected 
volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723907 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.723867076 +0000 UTC m=+11.346781152 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.723957 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724218 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.724235 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724242 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724265 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724268 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724251157 +0000 UTC m=+11.347165233 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724287 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724280958 +0000 UTC m=+11.347195024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724277 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724321 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724337 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724345 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724380 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724365 +0000 UTC m=+11.347279076 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.724438 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.724473 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724523 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724533 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724540 4430 projected.go:194] Error preparing data 
for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724588 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724608 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724619 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724613 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724493434 +0000 UTC m=+11.347407510 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724818 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724796212 +0000 UTC m=+11.347710288 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: E1203 14:08:26.724840 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.724833183 +0000 UTC m=+11.347747259 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.724899 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.724922 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.725007 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:26.724375 master-0 kubenswrapper[4430]: I1203 14:08:26.725070 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName:
\"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.725515 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.725541 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725625 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725635 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725644 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not
registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725658 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725678 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725689 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725734 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725741 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.725724358 +0000 UTC m=+11.348638624 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725746 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725768 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725868 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.725851812 +0000 UTC m=+11.348766078 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.725893 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.725882863 +0000 UTC m=+11.348797159 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.726395 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.726748 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.726772 4430 projected.go:194] Error preparing data for projected volume
kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.726897 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.726884871 +0000 UTC m=+11.349798947 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727303 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727338 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727371 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName:
\"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727453 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727523 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727534 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727556 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: I1203 14:08:26.727583 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID:
\"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727560 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727650 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727649 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727708 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.727685734 +0000 UTC m=+11.350599990 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727708 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727540 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727737 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727743 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727758 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727761 4430
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.727754406 +0000 UTC m=+11.350668482 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727759 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727795 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727808 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727768 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727627 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object
"openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727856 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727857 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.727825348 +0000 UTC m=+11.350739424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727865 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727892 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.72787942 +0000 UTC m=+11.350793706 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727918 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.72790795 +0000 UTC m=+11.350822236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727938 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727964 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.727976 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0
kubenswrapper[4430]: E1203 14:08:26.727942 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.727928631 +0000 UTC m=+11.350842707 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.733654 master-0 kubenswrapper[4430]: E1203 14:08:26.728029 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:30.728014583 +0000 UTC m=+11.350928859 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:26.736497 master-0 kubenswrapper[4430]: I1203 14:08:26.736452 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:26.736497 master-0 kubenswrapper[4430]: I1203 14:08:26.736483 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736597 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736606 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736634 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736675 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736684 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736674 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:26.736798 master-0 kubenswrapper[4430]: I1203 14:08:26.736767 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.736987 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737005 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737039 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737040 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737056 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737068 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737106 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737097 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737113 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737121 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: E1203 14:08:26.737127 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737089 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737150 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: E1203 14:08:26.737260 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737281 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737303 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737287 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:26.737452 master-0 kubenswrapper[4430]: I1203 14:08:26.737387 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737474 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737516 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737655 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737770 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737862 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.737942 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:26.738028 master-0 kubenswrapper[4430]: E1203 14:08:26.738024 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:26.738249 master-0 kubenswrapper[4430]: E1203 14:08:26.738173 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:26.738249 master-0 kubenswrapper[4430]: E1203 14:08:26.738213 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:26.738323 master-0 kubenswrapper[4430]: E1203 14:08:26.738283 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:26.738359 master-0 kubenswrapper[4430]: E1203 14:08:26.738339 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:26.738476 master-0 kubenswrapper[4430]: E1203 14:08:26.738451 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:26.738549 master-0 kubenswrapper[4430]: E1203 14:08:26.738527 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:26.738590 master-0 kubenswrapper[4430]: E1203 14:08:26.738560 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:26.738637 master-0 kubenswrapper[4430]: E1203 14:08:26.738624 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:26.738706 master-0 kubenswrapper[4430]: E1203 14:08:26.738684 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:26.738775 master-0 kubenswrapper[4430]: E1203 14:08:26.738756 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:26.738925 master-0 kubenswrapper[4430]: E1203 14:08:26.738894 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:26.739082 master-0 kubenswrapper[4430]: E1203 14:08:26.738973 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:26.739082 master-0 kubenswrapper[4430]: E1203 14:08:26.739036 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:26.739214 master-0 kubenswrapper[4430]: E1203 14:08:26.739180 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:26.739346 master-0 kubenswrapper[4430]: E1203 14:08:26.739306 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:26.739447 master-0 kubenswrapper[4430]: E1203 14:08:26.739408 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:27.447702 master-0 kubenswrapper[4430]: I1203 14:08:27.447521 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:27.453297 master-0 kubenswrapper[4430]: I1203 14:08:27.453230 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:27.584550 master-0 kubenswrapper[4430]: I1203 14:08:27.584377 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:27.584896 master-0 kubenswrapper[4430]: I1203 14:08:27.584390 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:27.585730 master-0 kubenswrapper[4430]: E1203 14:08:27.585674 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:27.585809 master-0 kubenswrapper[4430]: I1203 14:08:27.585772 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:27.585859 master-0 kubenswrapper[4430]: I1203 14:08:27.585817 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:27.585859 master-0 kubenswrapper[4430]: I1203 14:08:27.585836 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:27.585946 master-0 kubenswrapper[4430]: I1203 14:08:27.585862 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:27.585946 master-0 kubenswrapper[4430]: I1203 14:08:27.585891 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:27.585946 master-0 kubenswrapper[4430]: I1203 14:08:27.585917 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:27.585946 master-0 kubenswrapper[4430]: I1203 14:08:27.585942 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.585965 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.585988 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586014 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586044 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586071 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586092 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586115 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:27.586157 master-0 kubenswrapper[4430]: I1203 14:08:27.586141 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586169 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586194 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: E1203 14:08:27.586255 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586294 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586325 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586347 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586370 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586392 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586438 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586469 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586492 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:27.586514 master-0 kubenswrapper[4430]: I1203 14:08:27.586518 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: E1203 14:08:27.586564 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586590 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586619 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586644 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586669 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586702 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: E1203 14:08:27.586750 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: I1203 14:08:27.586793 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: E1203 14:08:27.586851 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: E1203 14:08:27.586906 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:27.587061 master-0 kubenswrapper[4430]: E1203 14:08:27.586973 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587101 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587149 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587215 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587269 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587343 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587391 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587448 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587488 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:27.587579 master-0 kubenswrapper[4430]: E1203 14:08:27.587530 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587584 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587656 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587741 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587818 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587888 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587952 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:27.588069 master-0 kubenswrapper[4430]: E1203 14:08:27.587997 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:27.588411 master-0 kubenswrapper[4430]: E1203 14:08:27.588084 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:27.588411 master-0 kubenswrapper[4430]: E1203 14:08:27.588235 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:27.588411 master-0 kubenswrapper[4430]: E1203 14:08:27.588310 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:27.588411 master-0 kubenswrapper[4430]: E1203 14:08:27.588372 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:27.588411 master-0 kubenswrapper[4430]: E1203 14:08:27.588412 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:27.588654 master-0 kubenswrapper[4430]: E1203 14:08:27.588497 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:27.588654 master-0 kubenswrapper[4430]: E1203 14:08:27.588581 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:27.588760 master-0 kubenswrapper[4430]: E1203 14:08:27.588661 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:27.588760 master-0 kubenswrapper[4430]: E1203 14:08:27.588734 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:27.588870 master-0 kubenswrapper[4430]: E1203 14:08:27.588793 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:27.588870 master-0 kubenswrapper[4430]: E1203 14:08:27.588850 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:27.673162 master-0 kubenswrapper[4430]: I1203 14:08:27.673109 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:27.673162 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:27.673162 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:27.673162 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:27.673162 master-0 kubenswrapper[4430]: I1203 14:08:27.673165 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:27.742382 master-0 kubenswrapper[4430]: I1203 14:08:27.742325 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/2.log" Dec 03 14:08:27.742921 master-0 kubenswrapper[4430]: I1203 14:08:27.742797 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/1.log" Dec 03 14:08:27.743763 master-0 kubenswrapper[4430]: I1203 14:08:27.743729 4430 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e" exitCode=255 Dec 03 14:08:27.743839 master-0 kubenswrapper[4430]: I1203 14:08:27.743813 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e"} Dec 03 14:08:27.744001 master-0 kubenswrapper[4430]: I1203 14:08:27.743973 4430 scope.go:117] "RemoveContainer" containerID="b5128cf16e986912a19370e205039ae1d79f9d6befc7a242cf621d37e267ba26" Dec 03 14:08:27.746909 master-0 kubenswrapper[4430]: I1203 14:08:27.746844 4430 scope.go:117] "RemoveContainer" containerID="91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e" Dec 03 14:08:27.747400 master-0 kubenswrapper[4430]: E1203 14:08:27.747367 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46)\"" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" Dec 03 14:08:27.753507 master-0 kubenswrapper[4430]: I1203 14:08:27.753463 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="bf8a107de0e49056c02cd1b163d66fa929c5a1def78064c138b37e5d5a4fd7cb" exitCode=0 Dec 03 14:08:27.753619 master-0 kubenswrapper[4430]: I1203 14:08:27.753533 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"bf8a107de0e49056c02cd1b163d66fa929c5a1def78064c138b37e5d5a4fd7cb"} Dec 03 14:08:27.757342 master-0 kubenswrapper[4430]: I1203 14:08:27.757293 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" 
event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"b52e70a2fd68cf19fc245924194323a013951ec6f99b3a5e99b3a9580cd13ee0"} Dec 03 14:08:27.757453 master-0 kubenswrapper[4430]: I1203 14:08:27.757349 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"7df03249c6c36a8bedc8b2855a0ac7732b8b760292170d049984dc2323c4c36c"} Dec 03 14:08:27.762145 master-0 kubenswrapper[4430]: I1203 14:08:27.762116 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:28.420801 master-0 kubenswrapper[4430]: I1203 14:08:28.420687 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:28.420801 master-0 kubenswrapper[4430]: I1203 14:08:28.420745 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:28.420801 master-0 kubenswrapper[4430]: I1203 14:08:28.420806 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 
03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420879 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420903 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420944 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420975 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod 
\"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.420999 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421017 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421037 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421058 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421080 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421098 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421116 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421138 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421160 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: 
\"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421177 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421203 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421223 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421242 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421300 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421321 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421339 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421357 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421375 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421393 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421413 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421450 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421469 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: 
\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421516 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421537 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421601 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421620 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421643 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.421578 master-0 kubenswrapper[4430]: I1203 14:08:28.421665 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421695 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421714 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421732 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421752 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421779 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421798 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421873 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421895 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod 
\"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421923 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421944 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421962 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.421987 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422010 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422041 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422061 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422082 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422100 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: 
\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422121 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422141 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422168 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422185 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422204 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422221 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422242 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422261 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422286 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422304 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422326 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422344 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422372 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422393 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422411 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422466 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422489 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422515 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422537 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422564 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422592 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:28.426005 master-0 kubenswrapper[4430]: I1203 14:08:28.422618 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:28.432560 master-0 kubenswrapper[4430]: E1203 14:08:28.432481 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:28.432560 master-0 kubenswrapper[4430]: E1203 14:08:28.432504 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:28.432560 master-0 kubenswrapper[4430]: E1203 14:08:28.432551 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432539 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432577 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432616 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432641 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432679 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432691 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432734 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432737 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432739 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432777 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432807 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432857 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432868 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432594 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.432569544 +0000 UTC m=+17.055483620 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:08:28.432860 master-0 kubenswrapper[4430]: E1203 14:08:28.432481 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432911 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.432896813 +0000 UTC m=+17.055810889 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432647 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432942 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432779 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432923 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.432917374 +0000 UTC m=+17.055831450 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432983 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.432970575 +0000 UTC m=+17.055884641 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432994 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432999 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.432991466 +0000 UTC m=+17.055905542 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433013 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433006206 +0000 UTC m=+17.055920282 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433022 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433036 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433026337 +0000 UTC m=+17.055940413 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433047 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433042158 +0000 UTC m=+17.055956234 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433050 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433059 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433051708 +0000 UTC m=+17.055965784 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433069 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433064408 +0000 UTC m=+17.055978484 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433080 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433074718 +0000 UTC m=+17.055988794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433082 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433092 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432986 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433092 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433087269 +0000 UTC m=+17.056001345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433137 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433167 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433175 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433194 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433215 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433230 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433234 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433139 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43312882 +0000 UTC m=+17.056042896 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433271 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433256964 +0000 UTC m=+17.056171040 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433273 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433282 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433275564 +0000 UTC m=+17.056189640 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433285 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433294 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433288465 +0000 UTC m=+17.056202541 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432890 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433306 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433299895 +0000 UTC m=+17.056213971 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433321 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433314525 +0000 UTC m=+17.056228601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433335 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433329396 +0000 UTC m=+17.056243472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433340 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433346 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433340586 +0000 UTC m=+17.056254662 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433363 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433352816 +0000 UTC m=+17.056266892 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433373 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433384 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433369487 +0000 UTC m=+17.056283563 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433442 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433447 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433499 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433501 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433394 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433524 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433532 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433136 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.432967 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433565 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433502 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433592 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433607 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433617 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433619 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433638 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433442 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433407458 +0000 UTC m=+17.056321534 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433673 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433553 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433708 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433676 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433677 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433669245 +0000 UTC m=+17.056583321 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433753 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433762 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433745 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433737547 +0000 UTC m=+17.056651623 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433801 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:36.433787799 +0000 UTC m=+17.056701875 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433812 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433806719 +0000 UTC m=+17.056720795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433817 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433821 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43381731 +0000 UTC m=+17.056731386 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433825 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433835 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43382652 +0000 UTC m=+17.056740596 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433717 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433843 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433813 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object 
"openshift-console"/"console-oauth-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433847 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43383998 +0000 UTC m=+17.056754046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433879 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433867181 +0000 UTC m=+17.056781257 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433892 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433884792 +0000 UTC m=+17.056798868 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433891 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433906 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433907 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433898142 +0000 UTC m=+17.056812228 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433921 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433914832 +0000 UTC m=+17.056828908 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433932 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433926963 +0000 UTC m=+17.056841039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433944 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433936803 +0000 UTC m=+17.056850879 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433947 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433956 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433950663 +0000 UTC m=+17.056864739 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433949 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433972 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433962964 +0000 UTC m=+17.056877040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433985 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433978784 +0000 UTC m=+17.056892860 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433986 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.433997 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.433991825 +0000 UTC m=+17.056905901 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434015 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434003775 +0000 UTC m=+17.056917851 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434022 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434027 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434021605 +0000 UTC m=+17.056935681 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434039 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434031996 +0000 UTC m=+17.056946072 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434049 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434044106 +0000 UTC m=+17.056958182 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434248 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434252 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434247702 +0000 UTC m=+17.057161778 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434260 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434266 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434259502 +0000 UTC m=+17.057173578 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434283 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434274163 +0000 UTC m=+17.057188239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434302 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434289323 +0000 UTC m=+17.057203399 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434304 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434309 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434314 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434307974 +0000 UTC m=+17.057222050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434332 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434321904 +0000 UTC m=+17.057235970 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434343 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434337634 +0000 UTC m=+17.057251710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434353 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434347865 +0000 UTC m=+17.057261941 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434366 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:36.434362065 +0000 UTC m=+17.057276141 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434364 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434373 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434376 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434371795 +0000 UTC m=+17.057285871 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434392 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:36.434385216 +0000 UTC m=+17.057299292 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434402 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434397006 +0000 UTC m=+17.057311082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:28.435616 master-0 kubenswrapper[4430]: E1203 14:08:28.434434 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434407406 +0000 UTC m=+17.057321482 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434447 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434442917 +0000 UTC m=+17.057356993 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434449 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434457 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434453088 +0000 UTC m=+17.057367164 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434470 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434463348 +0000 UTC m=+17.057377424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434479 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434475258 +0000 UTC m=+17.057389334 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434482 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434489 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434485319 +0000 UTC m=+17.057399395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434501 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434497209 +0000 UTC m=+17.057411285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434511 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434506689 +0000 UTC m=+17.057420765 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434519 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434523 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43451632 +0000 UTC m=+17.057430396 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434534 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43453009 +0000 UTC m=+17.057444166 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434542 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434543 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.4345392 +0000 UTC m=+17.057453276 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434557 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434553171 +0000 UTC m=+17.057467247 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434568 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434562761 +0000 UTC m=+17.057476837 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434581 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434573731 +0000 UTC m=+17.057487797 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434591 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434586131 +0000 UTC m=+17.057500207 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434604 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434597572 +0000 UTC m=+17.057511648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434616 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434611472 +0000 UTC m=+17.057525548 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434626 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434621743 +0000 UTC m=+17.057535819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434701 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434694875 +0000 UTC m=+17.057608951 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.434751 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.434625 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435320 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435325 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.434846449 +0000 UTC m=+17.057760525 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435514 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.435485387 +0000 UTC m=+17.058399473 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.435512 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435532 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.435588 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: 
\"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435626 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435659 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.435643232 +0000 UTC m=+17.058557318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435685 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.435676713 +0000 UTC m=+17.058590789 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.435727 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.435780 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435851 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435924 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.435991 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config 
podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.435982411 +0000 UTC m=+17.058896487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436015 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436007232 +0000 UTC m=+17.058921308 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.435968 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436058 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " 
pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436098 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436211 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436250 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436240239 +0000 UTC m=+17.059154315 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436234 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436299 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436336 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436372 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: 
\"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436436 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436458 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436518 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436540 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436585 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436597 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436612 4430 
configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436639 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436598 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436677 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436770 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436730612 +0000 UTC m=+17.059644698 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436806 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436792854 +0000 UTC m=+17.059706940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.436762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436882 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436849446 +0000 UTC m=+17.059763522 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436806 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436907 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:36.436896307 +0000 UTC m=+17.059810383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436930 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436919388 +0000 UTC m=+17.059833464 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436960 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436941898 +0000 UTC m=+17.059855974 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436975 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.436968369 +0000 UTC m=+17.059882445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.436994 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.43698356 +0000 UTC m=+17.059897636 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.437263 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437314 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437361 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.437466 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.437511 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.437500434 +0000 UTC m=+17.060414510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.437539 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.437525195 +0000 UTC m=+17.060439271 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.437552 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.437547366 +0000 UTC m=+17.060461442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437619 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437674 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437745 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437798 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.437837 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438381 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438522 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438561 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.438547504 +0000 UTC m=+17.061461580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438397 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438436 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438612 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438621 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.438612356 +0000 UTC m=+17.061526652 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438675 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438692 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438732 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.438722089 +0000 UTC m=+17.061636405 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438757 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438792 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438817 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438857 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.438834912 +0000 UTC m=+17.061749028 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438877 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438913 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.438957 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438919 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438479 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438953 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439055 4430 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438514 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.438990 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439005 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.438993967 +0000 UTC m=+17.061908263 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439171 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439157762 +0000 UTC m=+17.062071838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439200 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439237 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439256 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439268 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439301 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439349 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439356 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: I1203 14:08:28.439389 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439396 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:28.456928 master-0 kubenswrapper[4430]: E1203 14:08:28.439471 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439408 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439386788 +0000 UTC m=+17.062301064 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439491 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439508 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439498381 +0000 UTC m=+17.062412697 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439548 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439526102 +0000 UTC m=+17.062440238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439556 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439573 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439562693 +0000 UTC m=+17.062476989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439635 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439622015 +0000 UTC m=+17.062536301 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439657 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439647846 +0000 UTC m=+17.062562162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439710 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439701817 +0000 UTC m=+17.062615973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439728 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439721028 +0000 UTC m=+17.062635324 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439752 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439742948 +0000 UTC m=+17.062657124 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439913 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439899393 +0000 UTC m=+17.062813629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439935 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439926973 +0000 UTC m=+17.062841289 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.439949 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.439942924 +0000 UTC m=+17.062857240 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.439992 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440054 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440101 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440134 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440167 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440227 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440291 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440336 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440361 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440387 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440447 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440481 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440507 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440533 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440562 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440589 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:28.462641 master-0
kubenswrapper[4430]: I1203 14:08:28.440619 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440648 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440676 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440702 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440749 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod 
\"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440776 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440806 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440862 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440895 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440955 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.440998 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441029 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441072 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: 
\"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441097 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441128 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441156 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441182 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441208 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441250 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441323 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441350 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: 
\"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441390 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441432 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441458 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441485 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:28.462641 
master-0 kubenswrapper[4430]: I1203 14:08:28.441509 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441573 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441613 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441642 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod 
\"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441668 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441694 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: I1203 14:08:28.441723 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.441876 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.441912 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics 
podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.44190255 +0000 UTC m=+17.064816626 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.441959 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.441989 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.441979502 +0000 UTC m=+17.064893578 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442033 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442062 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442052854 +0000 UTC m=+17.064966930 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442105 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442129 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:36.442121516 +0000 UTC m=+17.065035832 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442173 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442213 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442185718 +0000 UTC m=+17.065099794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442236 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442255 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.44224942 +0000 UTC m=+17.065163496 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442277 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442293 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442288411 +0000 UTC m=+17.065202487 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442313 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442332 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442326672 +0000 UTC m=+17.065240748 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442365 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442382 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442377233 +0000 UTC m=+17.065291309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442411 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442447 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442441865 +0000 UTC m=+17.065355941 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442486 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442489 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442507 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442499997 +0000 UTC m=+17.065414063 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442506 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442516 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442543 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442566 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442546388 +0000 UTC m=+17.065460464 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442579 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442587 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442578599 +0000 UTC m=+17.065492675 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442600 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442604 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442600 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442594929 +0000 UTC m=+17.065509005 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442632 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442631 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442632 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442668 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442686 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442699 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442756 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442778 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442802 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442780 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442837 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442704 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442858 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442887 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442661 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442907 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442913 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442911 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442932 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442707 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442956 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442662 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.442632441 +0000 UTC m=+17.065546517 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442969 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442999 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.44298139 +0000 UTC m=+17.065895506 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443002 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443019 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443025 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443012971 +0000 UTC m=+17.065927077 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.442739 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443036 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443051 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443039692 +0000 UTC m=+17.065953798 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:08:28.462641 master-0 kubenswrapper[4430]: E1203 14:08:28.443069 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442745 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443075 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443065113 +0000 UTC m=+17.065979229 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442819 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443112 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443097014 +0000 UTC m=+17.066011120 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442850 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443135 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443125195 +0000 UTC m=+17.066039271 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442849 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442724 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.442976 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443154 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443147495 +0000 UTC m=+17.066061571 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443212 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443202417 +0000 UTC m=+17.066116493 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443225 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443219137 +0000 UTC m=+17.066133203 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443239 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443232878 +0000 UTC m=+17.066146954 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443252 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443245598 +0000 UTC m=+17.066159674 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443267 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443261058 +0000 UTC m=+17.066175134 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443280 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443275709 +0000 UTC m=+17.066189775 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443292 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443286039 +0000 UTC m=+17.066200115 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443303 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443298639 +0000 UTC m=+17.066212715 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443315 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.44331012 +0000 UTC m=+17.066224186 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443328 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.44332075 +0000 UTC m=+17.066234816 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443340 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443334461 +0000 UTC m=+17.066248537 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443349 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443345371 +0000 UTC m=+17.066259447 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443363 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443356931 +0000 UTC m=+17.066271007 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443372 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443368041 +0000 UTC m=+17.066282117 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443383 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443377162 +0000 UTC m=+17.066291238 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443396 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443390172 +0000 UTC m=+17.066304248 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443409 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443402472 +0000 UTC m=+17.066316548 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443437 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443432893 +0000 UTC m=+17.066346969 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443450 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443443784 +0000 UTC m=+17.066357860 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443462 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443455894 +0000 UTC m=+17.066369970 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443475 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443470344 +0000 UTC m=+17.066384420 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443488 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443480835 +0000 UTC m=+17.066394901 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443499 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443494425 +0000 UTC m=+17.066408501 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443509 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443504145 +0000 UTC m=+17.066418221 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443525 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443521456 +0000 UTC m=+17.066435532 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443537 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443531726 +0000 UTC m=+17.066445802 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443554 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443546787 +0000 UTC m=+17.066460863 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443565 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443560037 +0000 UTC m=+17.066474113 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:08:28.468040 master-0 kubenswrapper[4430]: E1203 14:08:28.443581 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:36.443576857 +0000 UTC m=+17.066490923 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:08:28.486540 master-0 kubenswrapper[4430]: I1203 14:08:28.484057 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:08:28.560276 master-0 kubenswrapper[4430]: I1203 14:08:28.560074 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"]
Dec 03 14:08:28.564259 master-0 kubenswrapper[4430]: I1203 14:08:28.564214 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c8b4c9687-4pxw5"]
Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584237 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584287 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584342 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584362 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584452 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584575 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584579 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584605 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584628 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584584 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584646 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584651 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584671 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584638 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584248 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584695 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: E1203 14:08:28.584708 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584727 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584673 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584760 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584806 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: E1203 14:08:28.584807 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584829 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584872 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:28.584849 master-0 kubenswrapper[4430]: I1203 14:08:28.584625 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.584904 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.584749 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.584621 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585010 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.585025 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.585033 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.584835 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.585047 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: I1203 14:08:28.585096 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585097 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585179 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585275 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585402 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585537 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585618 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585689 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585758 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585853 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:28.586117 master-0 kubenswrapper[4430]: E1203 14:08:28.585924 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586182 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586270 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586342 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586461 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586517 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586541 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586601 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586685 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:28.586866 master-0 kubenswrapper[4430]: E1203 14:08:28.586842 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:28.587169 master-0 kubenswrapper[4430]: E1203 14:08:28.586922 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:28.587263 master-0 kubenswrapper[4430]: I1203 14:08:28.584828 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:28.587507 master-0 kubenswrapper[4430]: E1203 14:08:28.587354 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:28.587507 master-0 kubenswrapper[4430]: E1203 14:08:28.587453 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:28.587909 master-0 kubenswrapper[4430]: E1203 14:08:28.587680 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:28.587909 master-0 kubenswrapper[4430]: E1203 14:08:28.587854 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:28.588738 master-0 kubenswrapper[4430]: E1203 14:08:28.588684 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:28.588864 master-0 kubenswrapper[4430]: E1203 14:08:28.588830 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:28.588930 master-0 kubenswrapper[4430]: E1203 14:08:28.588909 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:28.674014 master-0 kubenswrapper[4430]: I1203 14:08:28.673878 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:28.674014 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:28.674014 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:28.674014 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:28.674014 master-0 kubenswrapper[4430]: I1203 14:08:28.673949 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:28.674566 master-0 kubenswrapper[4430]: I1203 14:08:28.674044 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=333.674021336 podStartE2EDuration="5m33.674021336s" podCreationTimestamp="2025-12-03 14:02:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:08:28.67027629 +0000 UTC 
m=+9.293190366" watchObservedRunningTime="2025-12-03 14:08:28.674021336 +0000 UTC m=+9.296935412" Dec 03 14:08:28.763239 master-0 kubenswrapper[4430]: I1203 14:08:28.763134 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"887ada6287d98232addb4779c33abf88bd14342273f22a3807dcefc91a0fd10d"} Dec 03 14:08:28.764664 master-0 kubenswrapper[4430]: I1203 14:08:28.764633 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/2.log" Dec 03 14:08:28.765641 master-0 kubenswrapper[4430]: I1203 14:08:28.765615 4430 scope.go:117] "RemoveContainer" containerID="91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e" Dec 03 14:08:28.765804 master-0 kubenswrapper[4430]: E1203 14:08:28.765774 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46)\"" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" Dec 03 14:08:28.768313 master-0 kubenswrapper[4430]: I1203 14:08:28.768259 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="d19c7a974d3bce9d46bac309c56db4b9c19d9d8c638802fff953c547ffb3bbfc" exitCode=0 Dec 03 14:08:28.769329 master-0 kubenswrapper[4430]: I1203 14:08:28.769288 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" 
event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"d19c7a974d3bce9d46bac309c56db4b9c19d9d8c638802fff953c547ffb3bbfc"} Dec 03 14:08:28.811406 master-0 kubenswrapper[4430]: I1203 14:08:28.811217 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=11.81118028 podStartE2EDuration="11.81118028s" podCreationTimestamp="2025-12-03 14:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:08:28.784752318 +0000 UTC m=+9.407666414" watchObservedRunningTime="2025-12-03 14:08:28.81118028 +0000 UTC m=+9.434094356" Dec 03 14:08:28.913377 master-0 kubenswrapper[4430]: I1203 14:08:28.912257 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:28.918197 master-0 kubenswrapper[4430]: I1203 14:08:28.918029 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:29.004793 master-0 kubenswrapper[4430]: I1203 14:08:29.004347 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:29.010971 master-0 kubenswrapper[4430]: I1203 14:08:29.010820 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:08:29.101837 master-0 kubenswrapper[4430]: I1203 14:08:29.101654 4430 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:08:29.102450 master-0 kubenswrapper[4430]: I1203 14:08:29.102391 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
podUID="c98a8d85d3901d33f6fe192bdc7172aa" containerName="startup-monitor" containerID="cri-o://dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e" gracePeriod=5 Dec 03 14:08:29.176643 master-0 kubenswrapper[4430]: I1203 14:08:29.176542 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:29.176643 master-0 kubenswrapper[4430]: I1203 14:08:29.176664 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.176703 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.176757 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 
14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.176825 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.176909 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.176975 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.177075 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.177139 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.177174 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.177201 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:29.177352 master-0 kubenswrapper[4430]: I1203 14:08:29.177314 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177395 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " 
pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177510 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177531 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177563 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177592 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177668 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177687 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.177753 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178334 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: 
\"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178352 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178380 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178403 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178495 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:29.178646 master-0 kubenswrapper[4430]: I1203 14:08:29.178621 
4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: I1203 14:08:29.178727 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: I1203 14:08:29.178757 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: I1203 14:08:29.178776 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: I1203 14:08:29.178797 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: 
\"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.178951 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.178969 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.178979 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179027 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.17901323 +0000 UTC m=+17.801927306 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179083 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179093 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179100 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179119 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179113173 +0000 UTC m=+17.802027249 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179155 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179165 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179666 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179690 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179683329 +0000 UTC m=+17.802597405 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179235 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179710 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179716 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179735 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.17972953 +0000 UTC m=+17.802643596 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179244 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179748 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179754 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179771 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179766121 +0000 UTC m=+17.802680197 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179285 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179786 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179802 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179797222 +0000 UTC m=+17.802711298 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179287 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179817 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179824 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179840 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179835083 +0000 UTC m=+17.802749159 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179323 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179854 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179860 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179878 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179872294 +0000 UTC m=+17.802786360 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179323 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179892 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179898 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179915 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179910815 +0000 UTC m=+17.802824891 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179357 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179932 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179938 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179955 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179949566 +0000 UTC m=+17.802863642 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179367 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179970 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179977 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179995 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.179990158 +0000 UTC m=+17.802904234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179388 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180009 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180017 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180034 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180029179 +0000 UTC m=+17.802943255 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179392 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180052 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180059 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180078 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.18007378 +0000 UTC m=+17.802987856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179402 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180093 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180100 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180116 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180112001 +0000 UTC m=+17.803026077 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179436 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180132 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180139 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180157 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180151322 +0000 UTC m=+17.803065398 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179457 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180168 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180184 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180178153 +0000 UTC m=+17.803092229 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179461 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180200 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180207 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180225 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180218384 +0000 UTC m=+17.803132460 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179475 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180239 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180245 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180262 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180256675 +0000 UTC m=+17.803170751 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179489 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180278 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180284 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180299 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180294866 +0000 UTC m=+17.803208942 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179509 4430 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180318 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180325 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180342 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180337467 +0000 UTC m=+17.803251543 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179514 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180359 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180367 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180383 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180378309 +0000 UTC m=+17.803292385 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179528 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180399 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180439 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180457 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180451461 +0000 UTC m=+17.803365537 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179549 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180473 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180481 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180498 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180492272 +0000 UTC m=+17.803406358 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179556 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180515 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180523 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180540 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180534843 +0000 UTC m=+17.803448919 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179563 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180555 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180562 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180580 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180574394 +0000 UTC m=+17.803488470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179584 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180611 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180619 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180638 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180632716 +0000 UTC m=+17.803546792 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179604 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180658 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180667 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180687 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180682087 +0000 UTC m=+17.803596163 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179605 4430 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180703 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180710 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180728 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180723478 +0000 UTC m=+17.803637544 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179620 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180746 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180754 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.180772 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.18076752 +0000 UTC m=+17.803681596 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.180340 master-0 kubenswrapper[4430]: E1203 14:08:29.179647 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180790 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180798 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180816 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180810731 +0000 UTC m=+17.803724807 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.179647 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180833 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180841 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.199769 master-0 kubenswrapper[4430]: E1203 14:08:29.180861 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:37.180854962 +0000 UTC m=+17.803769038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:29.464136 master-0 kubenswrapper[4430]: I1203 14:08:29.463958 4430 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="739dc4db-558c-4492-aca2-06261f310a29" Dec 03 14:08:29.602690 master-0 kubenswrapper[4430]: I1203 14:08:29.602607 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:29.643559 master-0 kubenswrapper[4430]: I1203 14:08:29.643512 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:29.643719 master-0 kubenswrapper[4430]: I1203 14:08:29.643604 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:29.643719 master-0 kubenswrapper[4430]: I1203 14:08:29.643654 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:29.643719 master-0 kubenswrapper[4430]: I1203 14:08:29.643693 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:29.643845 master-0 kubenswrapper[4430]: I1203 14:08:29.643722 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:29.643845 master-0 kubenswrapper[4430]: I1203 14:08:29.643754 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:29.643845 master-0 kubenswrapper[4430]: I1203 14:08:29.643783 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:29.643845 master-0 kubenswrapper[4430]: I1203 14:08:29.643812 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:29.643845 master-0 kubenswrapper[4430]: I1203 14:08:29.643838 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:29.643986 master-0 kubenswrapper[4430]: I1203 14:08:29.643863 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:29.643986 master-0 kubenswrapper[4430]: I1203 14:08:29.643888 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:29.643986 master-0 kubenswrapper[4430]: I1203 14:08:29.643913 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:29.643986 master-0 kubenswrapper[4430]: I1203 14:08:29.643948 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:29.643986 master-0 kubenswrapper[4430]: I1203 14:08:29.643980 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:29.644116 master-0 kubenswrapper[4430]: I1203 14:08:29.644010 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:29.644116 master-0 kubenswrapper[4430]: I1203 14:08:29.644038 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:29.644116 master-0 kubenswrapper[4430]: I1203 14:08:29.644064 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:29.644230 master-0 kubenswrapper[4430]: E1203 14:08:29.644133 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:29.644230 master-0 kubenswrapper[4430]: I1203 14:08:29.644185 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:29.644230 master-0 kubenswrapper[4430]: I1203 14:08:29.644219 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:29.644358 master-0 kubenswrapper[4430]: I1203 14:08:29.644245 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:29.644358 master-0 kubenswrapper[4430]: I1203 14:08:29.644277 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:29.644358 master-0 kubenswrapper[4430]: I1203 14:08:29.644312 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:29.644358 master-0 kubenswrapper[4430]: I1203 14:08:29.644351 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:29.644559 master-0 kubenswrapper[4430]: I1203 14:08:29.644378 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:29.644559 master-0 kubenswrapper[4430]: I1203 14:08:29.644405 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:29.644559 master-0 kubenswrapper[4430]: E1203 14:08:29.644521 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:08:29.644647 master-0 kubenswrapper[4430]: I1203 14:08:29.644565 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:29.644647 master-0 kubenswrapper[4430]: I1203 14:08:29.644593 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:29.644647 master-0 kubenswrapper[4430]: I1203 14:08:29.644628 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:29.644777 master-0 kubenswrapper[4430]: I1203 14:08:29.644660 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:29.644777 master-0 kubenswrapper[4430]: E1203 14:08:29.644752 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:08:29.645057 master-0 kubenswrapper[4430]: I1203 14:08:29.644795 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:29.645057 master-0 kubenswrapper[4430]: I1203 14:08:29.644828 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:29.645057 master-0 kubenswrapper[4430]: E1203 14:08:29.644891 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:29.645057 master-0 kubenswrapper[4430]: I1203 14:08:29.644884 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:29.645057 master-0 kubenswrapper[4430]: I1203 14:08:29.644937 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:29.645643 master-0 kubenswrapper[4430]: E1203 14:08:29.645569 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:29.645713 master-0 kubenswrapper[4430]: E1203 14:08:29.645677 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:29.645748 master-0 kubenswrapper[4430]: E1203 14:08:29.645713 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:29.645788 master-0 kubenswrapper[4430]: E1203 14:08:29.645745 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:08:29.645788 master-0 kubenswrapper[4430]: E1203 14:08:29.645772 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:29.645888 master-0 kubenswrapper[4430]: E1203 14:08:29.645797 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:29.645888 master-0 kubenswrapper[4430]: E1203 14:08:29.645825 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad"
Dec 03 14:08:29.646120 master-0 kubenswrapper[4430]: E1203 14:08:29.646091 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:29.646168 master-0 kubenswrapper[4430]: E1203 14:08:29.646134 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:29.646227 master-0 kubenswrapper[4430]: E1203 14:08:29.646171 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:29.646684 master-0 kubenswrapper[4430]: E1203 14:08:29.646654 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:29.646779 master-0 kubenswrapper[4430]: E1203 14:08:29.646704 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:29.646779 master-0 kubenswrapper[4430]: E1203 14:08:29.646749 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:29.646857 master-0 kubenswrapper[4430]: E1203 14:08:29.646786 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:29.646857 master-0 kubenswrapper[4430]: E1203 14:08:29.646815 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:08:29.646857 master-0 kubenswrapper[4430]: E1203 14:08:29.646838 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:29.646945 master-0 kubenswrapper[4430]: E1203 14:08:29.646862 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:29.646945 master-0 kubenswrapper[4430]: E1203 14:08:29.646887 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:29.646945 master-0 kubenswrapper[4430]: E1203 14:08:29.643398 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c"
Dec 03 14:08:29.647157 master-0 kubenswrapper[4430]: E1203 14:08:29.647126 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:29.647216 master-0 kubenswrapper[4430]: E1203 14:08:29.647176 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:29.647760 master-0 kubenswrapper[4430]: E1203 14:08:29.647736 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:29.647866 master-0 kubenswrapper[4430]: E1203 14:08:29.647850 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:08:29.647945 master-0 kubenswrapper[4430]: E1203 14:08:29.647931 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:29.648033 master-0 kubenswrapper[4430]: E1203 14:08:29.648016 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:08:29.648154 master-0 kubenswrapper[4430]: E1203 14:08:29.648138 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:29.648236 master-0 kubenswrapper[4430]: E1203 14:08:29.648221 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:29.648312 master-0 kubenswrapper[4430]: E1203 14:08:29.648298 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:29.648440 master-0 kubenswrapper[4430]: E1203 14:08:29.648405 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:29.649488 master-0 kubenswrapper[4430]: E1203 14:08:29.649442 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:29.649571 master-0 kubenswrapper[4430]: I1203 14:08:29.649553 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12822200-5857-4e2a-96bf-31c2d917ae9e" path="/var/lib/kubelet/pods/12822200-5857-4e2a-96bf-31c2d917ae9e/volumes"
Dec 03 14:08:29.678138 master-0 kubenswrapper[4430]: I1203 14:08:29.678091 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:29.678138 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:29.678138 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:29.678138 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:29.678510 master-0 kubenswrapper[4430]: I1203 14:08:29.678478 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:29.764104 master-0 kubenswrapper[4430]: E1203 14:08:29.763868 4430 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 14:08:29.777866 master-0 kubenswrapper[4430]: I1203 14:08:29.777797 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="f2f397c76852cb120c0fa16e0296e324eaa34e90212d0f20041f8e9ada985aa9" exitCode=0
Dec 03 14:08:29.777866 master-0 kubenswrapper[4430]: I1203 14:08:29.777888 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"f2f397c76852cb120c0fa16e0296e324eaa34e90212d0f20041f8e9ada985aa9"}
Dec 03 14:08:29.787280 master-0 kubenswrapper[4430]: I1203 14:08:29.787258 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"f6fc0b8da5448f87611e04db90aca266d162bc2637b73ede6c1ca2a74107e8f9"}
Dec 03 14:08:30.584034 master-0 kubenswrapper[4430]: I1203 14:08:30.583647 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:30.584034 master-0 kubenswrapper[4430]: I1203 14:08:30.584034 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583694 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583736 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583647 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583757 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583794 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583805 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583790 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583812 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583834 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583840 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583849 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583871 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583873 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583884 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583888 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583916 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583974 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.584005 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: I1203 14:08:30.583654 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:30.584336 master-0 kubenswrapper[4430]: E1203 14:08:30.584341 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584379 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584427 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584389 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584396 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584437 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584439 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584458 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584489 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584465 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584584 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: I1203 14:08:30.584614 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584665 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584778 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584836 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584883 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:30.584971 master-0 kubenswrapper[4430]: E1203 14:08:30.584924 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585004 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585043 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585113 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585167 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585216 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585282 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585347 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585390 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585444 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:30.585509 master-0 kubenswrapper[4430]: E1203 14:08:30.585502 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:30.585809 master-0 kubenswrapper[4430]: E1203 14:08:30.585554 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:30.585809 master-0 kubenswrapper[4430]: E1203 14:08:30.585601 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:30.585809 master-0 kubenswrapper[4430]: E1203 14:08:30.585656 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:30.585892 master-0 kubenswrapper[4430]: E1203 14:08:30.585822 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:30.585922 master-0 kubenswrapper[4430]: E1203 14:08:30.585893 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:30.586152 master-0 kubenswrapper[4430]: E1203 14:08:30.585966 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:30.586152 master-0 kubenswrapper[4430]: E1203 14:08:30.586098 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:30.586231 master-0 kubenswrapper[4430]: E1203 14:08:30.586189 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:30.586284 master-0 kubenswrapper[4430]: E1203 14:08:30.586264 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:30.586339 master-0 kubenswrapper[4430]: E1203 14:08:30.586318 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:30.586430 master-0 kubenswrapper[4430]: E1203 14:08:30.586387 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:30.586507 master-0 kubenswrapper[4430]: E1203 14:08:30.586484 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:30.586590 master-0 kubenswrapper[4430]: E1203 14:08:30.586560 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:30.586671 master-0 kubenswrapper[4430]: E1203 14:08:30.586619 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:30.676562 master-0 kubenswrapper[4430]: I1203 14:08:30.676480 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:30.676562 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:30.676562 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:30.676562 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:30.676947 master-0 kubenswrapper[4430]: I1203 14:08:30.676598 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: I1203 14:08:30.759657 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: I1203 14:08:30.759742 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: 
I1203 14:08:30.759779 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: I1203 14:08:30.759808 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: I1203 14:08:30.759897 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.759913 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.759950 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: I1203 14:08:30.759961 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: 
\"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.759967 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.760054 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.760071 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760068 master-0 kubenswrapper[4430]: E1203 14:08:30.760083 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760098 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: I1203 14:08:30.760040 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760150 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760266 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760281 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760192 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760170242 +0000 UTC m=+19.383084368 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760035 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: I1203 14:08:30.760451 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760472 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760535 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760500102 +0000 UTC m=+19.383414178 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760566 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760554933 +0000 UTC m=+19.383469179 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760583 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760598 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760606 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760206 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760661 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760646576 +0000 UTC m=+19.383560652 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760663 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: I1203 14:08:30.760696 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760724 4430 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760713218 +0000 UTC m=+19.383627434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760752 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760762 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760770 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: I1203 14:08:30.760765 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760797 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.76078766 +0000 UTC m=+19.383701736 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.760779 master-0 kubenswrapper[4430]: E1203 14:08:30.760536 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760838 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.760832631 +0000 UTC m=+19.383746707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760899 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760913 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760923 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.760952 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760958 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:38.760949985 +0000 UTC m=+19.383864141 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761034 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761048 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761056 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.759980 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761082 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761074738 +0000 UTC m=+19.383988924 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761089 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761108 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761126 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761116369 +0000 UTC m=+19.384030545 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761174 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761189 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761197 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761200 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761226 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761215902 +0000 UTC m=+19.384130088 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761266 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761277 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761284 4430 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761309 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761302795 +0000 UTC m=+19.384216871 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.760219 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761331 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761341 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761367 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761329 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761378 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761387 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761370 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761362626 +0000 UTC m=+19.384276802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761514 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761573 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761591 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761599 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761630 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761619794 +0000 UTC m=+19.384533970 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761636 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761655 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761655 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761646664 +0000 UTC m=+19.384560870 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761574 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761665 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: I1203 14:08:30.761765 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761773 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761766918 +0000 UTC m=+19.384681104 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761907 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761921 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761927 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.762162 master-0 kubenswrapper[4430]: E1203 14:08:30.761950 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.761944073 +0000 UTC m=+19.384858149 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:30.764040 master-0 kubenswrapper[4430]: I1203 14:08:30.762893 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:08:30.764040 master-0 kubenswrapper[4430]: E1203 14:08:30.763319 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.764040 master-0 kubenswrapper[4430]: E1203 14:08:30.763338 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.764040 master-0 kubenswrapper[4430]: E1203 14:08:30.763369 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:38.763357163 +0000 UTC m=+19.386271239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:30.801455 master-0 kubenswrapper[4430]: I1203 14:08:30.799298 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"6c58206c8f470d87dc9d5a570f4eaec4253acfe9d6743f8cfb025a93e6bf3be3"}
Dec 03 14:08:31.584298 master-0 kubenswrapper[4430]: I1203 14:08:31.584244 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:31.584549 master-0 kubenswrapper[4430]: I1203 14:08:31.584328 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:31.584549 master-0 kubenswrapper[4430]: I1203 14:08:31.584447 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:31.584549 master-0 kubenswrapper[4430]: I1203 14:08:31.584532 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:31.584549 master-0 kubenswrapper[4430]: E1203 14:08:31.584521 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584551 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584571 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584588 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: E1203 14:08:31.584653 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584664 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584693 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:31.584700 master-0 kubenswrapper[4430]: I1203 14:08:31.584703 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584729 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: E1203 14:08:31.584787 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584805 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584823 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584849 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584856 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: E1203 14:08:31.584899 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584914 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584921 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584944 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:31.584975 master-0 kubenswrapper[4430]: I1203 14:08:31.584959 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.584962 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: E1203 14:08:31.585053 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585083 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585096 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585096 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585129 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585300 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: E1203 14:08:31.585308 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585331 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585335 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:31.585386 master-0 kubenswrapper[4430]: I1203 14:08:31.585397 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: I1203 14:08:31.585443 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: I1203 14:08:31.585476 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: I1203 14:08:31.585483 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: E1203 14:08:31.585473 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: I1203 14:08:31.585508 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: E1203 14:08:31.585560 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: I1203 14:08:31.585583 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: E1203 14:08:31.585663 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:08:31.585758 master-0 kubenswrapper[4430]: E1203 14:08:31.585734 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.585794 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.585861 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.585933 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.586021 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.586086 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.586195 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:31.586294 master-0 kubenswrapper[4430]: E1203 14:08:31.586253 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad"
Dec 03 14:08:31.586528 master-0 kubenswrapper[4430]: E1203 14:08:31.586310 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:31.586528 master-0 kubenswrapper[4430]: E1203 14:08:31.586378 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:31.586528 master-0 kubenswrapper[4430]: E1203 14:08:31.586466 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:31.586528 master-0 kubenswrapper[4430]: E1203 14:08:31.586525 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:08:31.586670 master-0 kubenswrapper[4430]: E1203 14:08:31.586610 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:31.586796 master-0 kubenswrapper[4430]: E1203 14:08:31.586740 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:31.586862 master-0 kubenswrapper[4430]: E1203 14:08:31.586828 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:31.586906 master-0 kubenswrapper[4430]: E1203 14:08:31.586890 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:31.586968 master-0 kubenswrapper[4430]: E1203 14:08:31.586956 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:31.587039 master-0 kubenswrapper[4430]: E1203 14:08:31.587012 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:31.587289 master-0 kubenswrapper[4430]: E1203 14:08:31.587248 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:31.587454 master-0 kubenswrapper[4430]: E1203 14:08:31.587406 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:31.587524 master-0 kubenswrapper[4430]: E1203 14:08:31.587501 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:31.587621 master-0 kubenswrapper[4430]: E1203 14:08:31.587595 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:31.587779 master-0 kubenswrapper[4430]: E1203 14:08:31.587752 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:31.587920 master-0 kubenswrapper[4430]: E1203 14:08:31.587851 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:31.588200 master-0 kubenswrapper[4430]: E1203 14:08:31.588176 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:31.673599 master-0 kubenswrapper[4430]: I1203 14:08:31.673526 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:31.673599 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:31.673599 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:31.673599 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:31.673907 master-0 kubenswrapper[4430]: I1203 14:08:31.673624 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:32.584152 master-0 kubenswrapper[4430]: I1203 14:08:32.583813 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:32.584152 master-0 kubenswrapper[4430]: I1203 14:08:32.584148 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583862 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583901 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583933 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583939 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583966 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583974 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584014 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584262 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584033 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: E1203 14:08:32.584274 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584307 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584313 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584363 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584111 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584110 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584096 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584158 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584019 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584272 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584034 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584079 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584072 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.583839 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584341 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584352 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584158 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: E1203 14:08:32.584521 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584572 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584609 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584615 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: E1203 14:08:32.584698 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: I1203 14:08:32.584733 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:32.584989 master-0 kubenswrapper[4430]: E1203 14:08:32.584848 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585118 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585248 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585330 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585392 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585510 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585575 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585648 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585724 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585806 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585920 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:32.586002 master-0 kubenswrapper[4430]: E1203 14:08:32.585990 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:32.586391 master-0 kubenswrapper[4430]: E1203 14:08:32.586071 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:32.586391 master-0 kubenswrapper[4430]: E1203 14:08:32.586153 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:32.586391 master-0 kubenswrapper[4430]: E1203 14:08:32.586226 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:32.586391 master-0 kubenswrapper[4430]: E1203 14:08:32.586292 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:32.586543 master-0 kubenswrapper[4430]: E1203 14:08:32.586403 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:32.586543 master-0 kubenswrapper[4430]: E1203 14:08:32.586519 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:32.586742 master-0 kubenswrapper[4430]: E1203 14:08:32.586685 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:32.586805 master-0 kubenswrapper[4430]: E1203 14:08:32.586781 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:32.586868 master-0 kubenswrapper[4430]: E1203 14:08:32.586847 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:32.586963 master-0 kubenswrapper[4430]: E1203 14:08:32.586920 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:08:32.587064 master-0 kubenswrapper[4430]: E1203 14:08:32.587031 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:32.587176 master-0 kubenswrapper[4430]: E1203 14:08:32.587141 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:32.587237 master-0 kubenswrapper[4430]: E1203 14:08:32.587217 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:32.587298 master-0 kubenswrapper[4430]: E1203 14:08:32.587280 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:32.587399 master-0 kubenswrapper[4430]: E1203 14:08:32.587381 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:32.587510 master-0 kubenswrapper[4430]: E1203 14:08:32.587485 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:32.777549 master-0 kubenswrapper[4430]: I1203 14:08:32.777487 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:32.777549 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:32.777549 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:32.777549 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:32.777817 master-0 kubenswrapper[4430]: I1203 14:08:32.777557 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:32.814845 master-0 kubenswrapper[4430]: I1203 14:08:32.814770 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"876e3a6d236e0be4c450dc348094b65ed7c200ebe5e36f5297e4821af364dfde"}
Dec 03 14:08:32.816833 master-0 kubenswrapper[4430]: I1203 14:08:32.816259 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:32.816833 master-0 kubenswrapper[4430]: I1203 14:08:32.816335 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:32.841297 master-0 kubenswrapper[4430]: I1203 14:08:32.841011 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:32.841599 master-0 kubenswrapper[4430]: I1203 14:08:32.841556 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:33.584183 master-0 kubenswrapper[4430]: I1203 14:08:33.584129 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:33.584754 master-0 kubenswrapper[4430]: E1203 14:08:33.584298 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:33.584754 master-0 kubenswrapper[4430]: I1203 14:08:33.584387 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:33.584754 master-0 kubenswrapper[4430]: E1203 14:08:33.584470 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:33.584754 master-0 kubenswrapper[4430]: I1203 14:08:33.584556 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:33.584754 master-0 kubenswrapper[4430]: E1203 14:08:33.584696 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:33.584957 master-0 kubenswrapper[4430]: I1203 14:08:33.584807 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:33.585064 master-0 kubenswrapper[4430]: I1203 14:08:33.584961 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:33.585313 master-0 kubenswrapper[4430]: I1203 14:08:33.585281 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:33.585352 master-0 kubenswrapper[4430]: E1203 14:08:33.585299 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c"
Dec 03 14:08:33.585352 master-0 kubenswrapper[4430]: I1203 14:08:33.585340 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:33.585427 master-0 kubenswrapper[4430]: I1203 14:08:33.585377 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:33.585427 master-0 kubenswrapper[4430]: I1203 14:08:33.585400 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:33.585486 master-0 kubenswrapper[4430]: I1203 14:08:33.585446 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:33.585486 master-0 kubenswrapper[4430]: I1203 14:08:33.585456 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:33.585486 master-0 kubenswrapper[4430]: I1203 14:08:33.585475 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585492 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585501 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585521 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585534 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585557 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585558 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585587 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:33.585603 master-0 kubenswrapper[4430]: I1203 14:08:33.585599 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:33.585812 master-0 kubenswrapper[4430]: E1203 14:08:33.585692 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:33.585812 master-0 kubenswrapper[4430]: I1203 14:08:33.585738 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:33.585812 master-0 kubenswrapper[4430]: I1203 14:08:33.585766 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:33.585812 master-0 kubenswrapper[4430]: I1203 14:08:33.585786 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:33.585812 master-0 kubenswrapper[4430]: I1203 14:08:33.585810 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.585833 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.585860 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.585888 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.585860 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: E1203 14:08:33.585957 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.585992 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.586019 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.586044 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:33.586076 master-0 kubenswrapper[4430]: I1203 14:08:33.586073 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: E1203 14:08:33.586147 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: I1203 14:08:33.586187 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: I1203 14:08:33.586226 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: E1203 14:08:33.586304 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: E1203 14:08:33.586377 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:33.586544 master-0 kubenswrapper[4430]: E1203 14:08:33.586456 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:08:33.586933 master-0 kubenswrapper[4430]: E1203 14:08:33.586693 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:33.586933 master-0 kubenswrapper[4430]: E1203 14:08:33.586821 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:33.586933 master-0 kubenswrapper[4430]: E1203 14:08:33.586923 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:08:33.587047 master-0 kubenswrapper[4430]: E1203 14:08:33.587015 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:08:33.587131 master-0 kubenswrapper[4430]: E1203 14:08:33.587098 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:33.587229 master-0 kubenswrapper[4430]: E1203 14:08:33.587194 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:33.587367 master-0 kubenswrapper[4430]: E1203 14:08:33.587326 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:33.587510 master-0 kubenswrapper[4430]: E1203 14:08:33.587447 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:33.587555 master-0 kubenswrapper[4430]: E1203 14:08:33.587535 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:33.587631 master-0 kubenswrapper[4430]: E1203 14:08:33.587606 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:08:33.587733 master-0 kubenswrapper[4430]: E1203 14:08:33.587688 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:33.587838 master-0 kubenswrapper[4430]: E1203 14:08:33.587803 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:33.587948 master-0 kubenswrapper[4430]: E1203 14:08:33.587915 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:33.588042 master-0 kubenswrapper[4430]: E1203 14:08:33.588017 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:33.588128 master-0 kubenswrapper[4430]: E1203 14:08:33.588102 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad"
Dec 03 14:08:33.588452 master-0 kubenswrapper[4430]: E1203 14:08:33.588405 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:33.588586 master-0 kubenswrapper[4430]: E1203 14:08:33.588522 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:33.588634 master-0 kubenswrapper[4430]: E1203 14:08:33.588592 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:08:33.588800 master-0 kubenswrapper[4430]: E1203 14:08:33.588695 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:33.588800 master-0 kubenswrapper[4430]: E1203 14:08:33.588764 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:33.589069 master-0 kubenswrapper[4430]: E1203 14:08:33.588833 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:08:33.589309 master-0 kubenswrapper[4430]: E1203 14:08:33.589275 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:33.589382 master-0 kubenswrapper[4430]: E1203 14:08:33.589356 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:33.589506 master-0 kubenswrapper[4430]: E1203 14:08:33.589481 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:33.674902 master-0 kubenswrapper[4430]: I1203 14:08:33.674825 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:33.674902 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:33.674902 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:33.674902 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:33.675233 master-0 kubenswrapper[4430]: I1203 14:08:33.674913 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:33.818818 master-0 kubenswrapper[4430]: I1203 14:08:33.818655 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 14:08:34.583553 master-0 kubenswrapper[4430]: I1203 14:08:34.583504 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:34.583553 master-0 kubenswrapper[4430]: I1203 14:08:34.583557 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: E1203 14:08:34.583636 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583510 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583677 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583776 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583696 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583709 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583715 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583677 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583760 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583740 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583928 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:34.583929 master-0 kubenswrapper[4430]: I1203 14:08:34.583938 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583943 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583960 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583965 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583981 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583995 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583997 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583944 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.583983 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584009 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584016 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584040 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584070 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584097 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584122 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584140 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584190 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584202 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: I1203 14:08:34.584251 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584288 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584355 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584407 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584574 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584628 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584676 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:34.584706 master-0 kubenswrapper[4430]: E1203 14:08:34.584723 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.584782 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.584834 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.584874 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.584947 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585005 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585059 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585111 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585187 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585249 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585292 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585356 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585431 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:34.585491 master-0 kubenswrapper[4430]: E1203 14:08:34.585488 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:34.585866 master-0 kubenswrapper[4430]: E1203 14:08:34.585597 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:34.585866 master-0 kubenswrapper[4430]: E1203 14:08:34.585652 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:34.585866 master-0 kubenswrapper[4430]: E1203 14:08:34.585702 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:34.585866 master-0 kubenswrapper[4430]: E1203 14:08:34.585758 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:34.585866 master-0 kubenswrapper[4430]: E1203 14:08:34.585807 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:34.586008 master-0 kubenswrapper[4430]: E1203 14:08:34.585872 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:34.586008 master-0 kubenswrapper[4430]: I1203 14:08:34.585905 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:34.586008 master-0 kubenswrapper[4430]: E1203 14:08:34.585963 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:34.674787 master-0 kubenswrapper[4430]: I1203 14:08:34.674689 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:34.674787 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:34.674787 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:34.674787 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:34.675167 master-0 kubenswrapper[4430]: I1203 14:08:34.674800 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:34.765870 master-0 kubenswrapper[4430]: E1203 14:08:34.765801 4430 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:08:34.804926 master-0 kubenswrapper[4430]: I1203 14:08:34.804896 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_c98a8d85d3901d33f6fe192bdc7172aa/startup-monitor/2.log" Dec 03 14:08:34.805119 master-0 kubenswrapper[4430]: I1203 14:08:34.804996 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:34.823227 master-0 kubenswrapper[4430]: I1203 14:08:34.823179 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_c98a8d85d3901d33f6fe192bdc7172aa/startup-monitor/2.log" Dec 03 14:08:34.823538 master-0 kubenswrapper[4430]: I1203 14:08:34.823261 4430 generic.go:334] "Generic (PLEG): container finished" podID="c98a8d85d3901d33f6fe192bdc7172aa" containerID="dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e" exitCode=137 Dec 03 14:08:34.823538 master-0 kubenswrapper[4430]: I1203 14:08:34.823362 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:08:34.823538 master-0 kubenswrapper[4430]: I1203 14:08:34.823470 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:08:34.823731 master-0 kubenswrapper[4430]: I1203 14:08:34.823562 4430 scope.go:117] "RemoveContainer" containerID="dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e" Dec 03 14:08:34.837478 master-0 kubenswrapper[4430]: I1203 14:08:34.837117 4430 scope.go:117] "RemoveContainer" containerID="dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e" Dec 03 14:08:34.837749 master-0 kubenswrapper[4430]: E1203 14:08:34.837706 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e\": container with ID starting with dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e not found: ID does not exist" containerID="dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e" Dec 03 14:08:34.837809 master-0 kubenswrapper[4430]: I1203 14:08:34.837750 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e"} err="failed to get container status \"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e\": rpc error: code = NotFound desc = could not find container \"dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e\": container with ID starting with dfcaf7c06f2d0c41a883d89353deb553feed1f9443d00e9b9839adba40945f0e not found: ID does not exist" Dec 03 14:08:34.893482 master-0 kubenswrapper[4430]: I1203 14:08:34.893307 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") pod 
\"c98a8d85d3901d33f6fe192bdc7172aa\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " Dec 03 14:08:34.893482 master-0 kubenswrapper[4430]: I1203 14:08:34.893491 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") pod \"c98a8d85d3901d33f6fe192bdc7172aa\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893526 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") pod \"c98a8d85d3901d33f6fe192bdc7172aa\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893686 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests" (OuterVolumeSpecName: "manifests") pod "c98a8d85d3901d33f6fe192bdc7172aa" (UID: "c98a8d85d3901d33f6fe192bdc7172aa"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893763 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") pod \"c98a8d85d3901d33f6fe192bdc7172aa\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893797 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") pod \"c98a8d85d3901d33f6fe192bdc7172aa\" (UID: \"c98a8d85d3901d33f6fe192bdc7172aa\") " Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893854 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log" (OuterVolumeSpecName: "var-log") pod "c98a8d85d3901d33f6fe192bdc7172aa" (UID: "c98a8d85d3901d33f6fe192bdc7172aa"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.893926 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock" (OuterVolumeSpecName: "var-lock") pod "c98a8d85d3901d33f6fe192bdc7172aa" (UID: "c98a8d85d3901d33f6fe192bdc7172aa"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:34.894334 master-0 kubenswrapper[4430]: I1203 14:08:34.894016 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "c98a8d85d3901d33f6fe192bdc7172aa" (UID: "c98a8d85d3901d33f6fe192bdc7172aa"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:34.897554 master-0 kubenswrapper[4430]: I1203 14:08:34.897495 4430 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-manifests\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:34.897554 master-0 kubenswrapper[4430]: I1203 14:08:34.897534 4430 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-log\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:34.897554 master-0 kubenswrapper[4430]: I1203 14:08:34.897549 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:34.897554 master-0 kubenswrapper[4430]: I1203 14:08:34.897561 4430 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:34.904393 master-0 kubenswrapper[4430]: I1203 14:08:34.904300 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "c98a8d85d3901d33f6fe192bdc7172aa" (UID: "c98a8d85d3901d33f6fe192bdc7172aa"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:08:35.000481 master-0 kubenswrapper[4430]: I1203 14:08:35.000392 4430 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/c98a8d85d3901d33f6fe192bdc7172aa-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:08:35.583969 master-0 kubenswrapper[4430]: I1203 14:08:35.583854 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:35.583969 master-0 kubenswrapper[4430]: I1203 14:08:35.583919 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:35.583969 master-0 kubenswrapper[4430]: I1203 14:08:35.583952 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584007 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.583854 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584122 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584137 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.584134 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584168 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584193 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584223 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584242 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584247 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584296 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.584297 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584323 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584351 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584464 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584476 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584480 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584505 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584407 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584399 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584533 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584495 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584553 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584505 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584445 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584410 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584501 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.584811 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584909 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584945 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.584975 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.585024 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.585111 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.585154 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: I1203 14:08:35.585276 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.585389 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:35.585825 master-0 kubenswrapper[4430]: E1203 14:08:35.585628 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.585982 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.586230 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.586325 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.586524 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.586869 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.587077 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.587390 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.587655 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.587777 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.588073 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.589643 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.589791 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.590078 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.590298 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.590571 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.590772 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:35.591500 master-0 kubenswrapper[4430]: E1203 14:08:35.591303 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.591581 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.591792 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.591948 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.592151 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.592305 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.592492 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.592684 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:35.593183 master-0 kubenswrapper[4430]: E1203 14:08:35.593152 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:35.593931 master-0 kubenswrapper[4430]: E1203 14:08:35.593708 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:35.600099 master-0 kubenswrapper[4430]: E1203 14:08:35.599967 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:35.600585 master-0 kubenswrapper[4430]: E1203 14:08:35.600467 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:35.956582 master-0 kubenswrapper[4430]: I1203 14:08:35.953677 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:35.956582 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:35.956582 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:35.956582 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:35.956582 master-0 kubenswrapper[4430]: I1203 14:08:35.953744 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:35.960656 
master-0 kubenswrapper[4430]: I1203 14:08:35.960548 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c98a8d85d3901d33f6fe192bdc7172aa" path="/var/lib/kubelet/pods/c98a8d85d3901d33f6fe192bdc7172aa/volumes" Dec 03 14:08:35.961523 master-0 kubenswrapper[4430]: I1203 14:08:35.961488 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Dec 03 14:08:36.483934 master-0 kubenswrapper[4430]: I1203 14:08:36.483797 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:36.483934 master-0 kubenswrapper[4430]: I1203 14:08:36.483915 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.483960 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.483990 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484026 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484038 4430 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484096 4430 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484055 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484157 4430 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484188 4430 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object 
"openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484170 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484139869 +0000 UTC m=+33.107054005 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484139 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484260 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484295 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484122 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484330 4430 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0 podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484315004 +0000 UTC m=+33.107229080 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484350 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484342715 +0000 UTC m=+33.107256791 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484364 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484358096 +0000 UTC m=+33.107272302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484377 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs podName:22673f47-9484-4eed-bbce-888588c754ed nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484371966 +0000 UTC m=+33.107286122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs") pod "multus-admission-controller-5bdcc987c4-x99xc" (UID: "22673f47-9484-4eed-bbce-888588c754ed") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484399 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484442 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484498 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484505 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484537 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484551 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484536831 +0000 UTC m=+33.107450997 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484580 4430 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484604 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484598012 +0000 UTC m=+33.107512148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: I1203 14:08:36.484600 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:36.484615 master-0 kubenswrapper[4430]: E1203 14:08:36.484644 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484666 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484660104 +0000 UTC m=+33.107574260 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.484640 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484673 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.484697 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484702 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484693815 +0000 UTC m=+33.107607891 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484714 4430 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484778 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484769477 +0000 UTC m=+33.107683543 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.484773 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484803 4430 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484845 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.484821089 +0000 UTC m=+33.107735165 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484847 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484855 4430 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484765 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484886 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48486014 +0000 UTC m=+33.107774216 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.484943 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.484974 4430 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485019 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485011874 +0000 UTC m=+33.107925950 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485036 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485028945 +0000 UTC m=+33.107943021 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485049 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485043275 +0000 UTC m=+33.107957351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485085 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485080966 +0000 UTC m=+33.107995042 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485098 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485093197 +0000 UTC m=+33.108007263 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.485116 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.485157 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485224 4430 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485253 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485244041 +0000 UTC m=+33.108158197 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.485182 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.485295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: I1203 14:08:36.485392 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485295 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:36.485396 master-0 kubenswrapper[4430]: E1203 14:08:36.485412 4430 secret.go:189] Couldn't get secret
openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485357 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485476 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485500 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485488978 +0000 UTC m=+33.108403054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485520 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485558 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485569 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485601 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485591781 +0000 UTC m=+33.108505857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485608 4430 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485628 4430 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485617 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485610981 +0000 UTC m=+33.108525057 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485665 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485689 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485675513 +0000 UTC m=+33.108589669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485763 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485784 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485775926 +0000 UTC m=+33.108690002 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485801 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485794667 +0000 UTC m=+33.108708733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485813 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485807607 +0000 UTC m=+33.108721683 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485817 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485845 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485869 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485876 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485896 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485901 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485894289 +0000 UTC m=+33.108808365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485941 4430 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485944 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485937991 +0000 UTC m=+33.108852067 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.485964 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.485958881 +0000 UTC m=+33.108872957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.485982 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486003 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.486010 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486041 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486056 4430 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486083 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486021543 +0000 UTC m=+33.108935619 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.486136 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: I1203 14:08:36.486171 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486203 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486184958 +0000 UTC m=+33.109099034 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-config" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486218 4430 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486240 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:36.486267 master-0 kubenswrapper[4430]: E1203 14:08:36.486244 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486229269 +0000 UTC m=+33.109143365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486339 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486368 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486385 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486376483 +0000 UTC m=+33.109290559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486448 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486433935 +0000 UTC m=+33.109348071 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486478 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486509 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486518 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486509177 +0000 UTC m=+33.109423303 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486505 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486540 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.486543 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.486534668 +0000 UTC m=+33.109448804 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486590 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486617 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486642 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486664 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486689 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486712 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486735 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486760 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486781 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486804 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486841 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486870 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486899 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486921 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486940 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486959 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.486983 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487011 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod 
\"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487040 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487063 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487088 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487110 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:36.487386 
master-0 kubenswrapper[4430]: I1203 14:08:36.487129 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487147 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487190 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487224 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487249 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487270 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487288 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487303 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487311 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.487300709 +0000 UTC m=+33.110214865 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: I1203 14:08:36.487333 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487349 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487372 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487436 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:36.487386 master-0 kubenswrapper[4430]: E1203 14:08:36.487448 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487440 4430 
secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487484 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487508 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487524 4430 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.487370 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487531 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487560 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487576 4430 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object 
"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487599 4430 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487599 4430 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487632 4430 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487632 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487651 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487379 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487371251 +0000 UTC m=+33.110285327 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487603 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487673 4430 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487681 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487688 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48767433 +0000 UTC m=+33.110588456 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487578 4430 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487715 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487706251 +0000 UTC m=+33.110620417 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487468 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487741 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487733242 +0000 UTC m=+33.110647418 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487487 4430 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487770 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487779 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487771403 +0000 UTC m=+33.110685559 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487490 4430 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487802 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487791703 +0000 UTC m=+33.110705859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487812 4430 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487374 4430 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487546 4430 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered 
Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487580 4430 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487830 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487818714 +0000 UTC m=+33.110732790 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487635 4430 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487892 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487884616 +0000 UTC m=+33.110798682 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487909 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487902507 +0000 UTC m=+33.110816583 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.487924 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.487917477 +0000 UTC m=+33.110831553 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.487945 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.487975 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488009 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488050 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: 
\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488064 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488073 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488121 4430 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488137 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488093482 +0000 UTC m=+33.111007618 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488149 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488159 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488149654 +0000 UTC m=+33.111063810 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488171 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488177 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488169634 +0000 UTC m=+33.111083790 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488237 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488271 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488294 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488276967 +0000 UTC m=+33.111191123 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488312 4430 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488326 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488312298 +0000 UTC m=+33.111226594 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488351 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488341879 +0000 UTC m=+33.111256035 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488370 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48836473 +0000 UTC m=+33.111278806 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488388 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48838024 +0000 UTC m=+33.111294316 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488439 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.488404641 +0000 UTC m=+33.111318787 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488472 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488460472 +0000 UTC m=+33.111374598 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488502 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488493593 +0000 UTC m=+33.111407749 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488523 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488514114 +0000 UTC m=+33.111428280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488543 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488533664 +0000 UTC m=+33.111447930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488571 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488555205 +0000 UTC m=+33.111469371 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488592 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488584016 +0000 UTC m=+33.111498172 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488609 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488603186 +0000 UTC m=+33.111517382 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488623 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488617807 +0000 UTC m=+33.111532003 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488641 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488633877 +0000 UTC m=+33.111548073 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488657 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488651108 +0000 UTC m=+33.111565284 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488675 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488667218 +0000 UTC m=+33.111581374 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488690 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488684499 +0000 UTC m=+33.111598685 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488702 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488697959 +0000 UTC m=+33.111612035 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488719 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48871425 +0000 UTC m=+33.111628406 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488746 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.48874046 +0000 UTC m=+33.111654606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488762 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488755391 +0000 UTC m=+33.111669567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488778 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488770771 +0000 UTC m=+33.111684947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488809 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488839 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488871 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488913 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488901425 +0000 UTC m=+33.111815501 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488922 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488932 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488921816 +0000 UTC m=+33.111836012 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488872 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488949 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics 
podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488941476 +0000 UTC m=+33.111855552 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488958 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.488974 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.488998 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.488989047 +0000 UTC m=+33.111903193 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.489024 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.489025 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.489054 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489045539 +0000 UTC m=+33.111959665 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.489057 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.489083 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48907639 +0000 UTC m=+33.111990556 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.489141 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.489198 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" 
(UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.489226 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: I1203 14:08:36.489254 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.489313 master-0 kubenswrapper[4430]: E1203 14:08:36.489281 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489281 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489297 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489314 4430 configmap.go:193] Couldn't get configMap 
openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489503 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48941267 +0000 UTC m=+33.112326786 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489344 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489355 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489598 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489668 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:36.492531 master-0 
kubenswrapper[4430]: E1203 14:08:36.489721 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489710428 +0000 UTC m=+33.112624564 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489729 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489748 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489739779 +0000 UTC m=+33.112653965 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489762 4430 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489773 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48976581 +0000 UTC m=+33.112679996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"service-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489790 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.48978281 +0000 UTC m=+33.112696996 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489818 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489810091 +0000 UTC m=+33.112724257 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489841 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489872 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489899 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489917 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489896293 +0000 UTC m=+33.112810369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489924 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489949 4430 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.489950 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489973 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489955865 +0000 UTC m=+33.112870011 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489986 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.489998 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.489991426 +0000 UTC m=+33.112905502 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490014 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490016 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490009647 +0000 UTC m=+33.112923713 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490030 4430 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490050 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490062 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490053768 +0000 UTC m=+33.112967944 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490057 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490081 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490070878 +0000 UTC m=+33.112984954 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490102 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490126 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.49011958 +0000 UTC m=+33.113033656 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490163 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490199 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490216 4430 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490242 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490252 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490242763 +0000 UTC m=+33.113156909 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490279 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490294 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490344 4430 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490350 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490373 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490366177 +0000 UTC m=+33.113280253 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490442 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490447 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490435249 +0000 UTC m=+33.113349325 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490481 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490502 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490515 4430 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490488 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490515 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490507551 +0000 UTC m=+33.113421627 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490571 4430 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490554 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490610 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490591433 +0000 UTC m=+33.113505589 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490636 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490654 4430 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490664 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490714 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490674645 +0000 UTC m=+33.113588801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490718 4430 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490747 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490740237 +0000 UTC m=+33.113654313 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490762 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490754698 +0000 UTC m=+33.113668774 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490773 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490767668 +0000 UTC m=+33.113681744 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490780 4430 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490817 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490808169 +0000 UTC m=+33.113722295 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490744 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490838 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490859 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490853881 +0000 UTC m=+33.113767957 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490862 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490899 4430 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490919 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490914072 +0000 UTC m=+33.113828148 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490898 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490949 4430 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.490949 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.490980 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.490970924 +0000 UTC m=+33.113885070 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491004 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491008 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491026 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491021105 +0000 UTC m=+33.113935181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491048 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491081 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491086 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491157 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491150399 +0000 UTC m=+33.114064475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491176 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491126 4430 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491269 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491289 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491282973 +0000 UTC m=+33.114197049 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491329 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491300523 +0000 UTC m=+33.114214669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491354 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491245 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491408 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.491398666 +0000 UTC m=+33.114312842 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491464 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491491 4430 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491498 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491532 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491519089 +0000 UTC m=+33.114433235 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491560 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491587 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491580411 +0000 UTC m=+33.114494487 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491557 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491609 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491636 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491628713 +0000 UTC m=+33.114542889 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491633 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491663 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491690 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491683384 +0000 UTC m=+33.114597460 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491689 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491718 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491737 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491749 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491740466 +0000 UTC m=+33.114654632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491772 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491791 4430 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491801 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491831 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491860 4430 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: 
object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491876 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491892 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.49188357 +0000 UTC m=+33.114797726 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491916 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491933 4430 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 
14:08:36.491952 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491965 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.491954442 +0000 UTC m=+33.114868608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.491987 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.491997 4430 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.492020 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.492014174 +0000 UTC m=+33.114928250 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: I1203 14:08:36.492018 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:36.492531 master-0 kubenswrapper[4430]: E1203 14:08:36.492040 4430 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492057 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492052525 +0000 UTC m=+33.114966601 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492081 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492100 4430 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492117 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492123 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492153 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod 
\"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492178 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492199 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492224 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492249 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " 
pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492269 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492128 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492467 4430 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492477 4430 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492193 4430 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492235 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492518 4430 
projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492525 4430 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492533 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492277 4430 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492293 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492281721 +0000 UTC m=+33.115195897 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492326 4430 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492573 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492596 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.49258765 +0000 UTC m=+33.115501806 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492330 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492632 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492653 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492644201 +0000 UTC m=+33.115558277 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492668 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.492662672 +0000 UTC m=+33.115576748 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492675 4430 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492370 4430 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492692 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492673992 +0000 UTC m=+33.115588158 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492709 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.492702593 +0000 UTC m=+33.115616669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492374 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492724 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492717894 +0000 UTC m=+33.115631970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492738 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492733554 +0000 UTC m=+33.115647630 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492748 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492744214 +0000 UTC m=+33.115658290 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492408 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492764 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492755795 +0000 UTC m=+33.115669861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492783 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492778935 +0000 UTC m=+33.115693011 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492410 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492815 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492826 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492816596 +0000 UTC m=+33.115730762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492857 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492864 4430 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.492896 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492902 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.492895919 +0000 UTC m=+33.115809995 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492940 4430 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492939 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.49293039 +0000 UTC m=+33.115844556 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492976 4430 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.492980 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.492972401 +0000 UTC m=+33.115886477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493003 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493014 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.493005062 +0000 UTC m=+33.115919248 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493035 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493040 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493056 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493051783 +0000 UTC m=+33.115965859 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493073 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493067224 +0000 UTC m=+33.115981300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493089 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493115 4430 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493115 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod 
\"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493148 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493155 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493145566 +0000 UTC m=+33.116059732 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493201 4430 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493239 4430 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: 
I1203 14:08:36.493200 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493293 4430 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493311 4430 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493243 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493235308 +0000 UTC m=+33.116149384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493338 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.493327941 +0000 UTC m=+33.116242097 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493354 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493347082 +0000 UTC m=+33.116261258 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493386 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493438 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493439 4430 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493459 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493447064 +0000 UTC m=+33.116361220 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493484 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493476235 +0000 UTC m=+33.116390411 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493511 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493546 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493517 4430 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493538 4430 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493553 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493547327 +0000 UTC m=+33.116461403 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493601 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493606 4430 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493634 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493639 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.4936308 +0000 UTC m=+33.116544956 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493639 4430 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493710 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493700702 +0000 UTC m=+33.116614878 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493718 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493730 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493721812 +0000 UTC m=+33.116636008 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493670 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493740 4430 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493769 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493762713 +0000 UTC m=+33.116676789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493814 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.493789424 +0000 UTC m=+33.116703560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493875 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493939 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493689 4430 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.493979 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493987 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.493977509 +0000 UTC m=+33.116891735 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494027 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494046 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494059 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494077 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.494067912 +0000 UTC m=+33.116982078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493698 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494101 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494120 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494112673 +0000 UTC m=+33.117026839 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494147 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494182 4430 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494184 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494244 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494274 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494295 4430 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494308 4430 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494336 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494325449 +0000 UTC m=+33.117239605 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: I1203 14:08:36.494306 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494357 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494434 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494379951 +0000 UTC m=+33.117294027 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494484 4430 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494509 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494502104 +0000 UTC m=+33.117416280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.493944 4430 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:36.496387 master-0 kubenswrapper[4430]: E1203 14:08:36.494533 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494527995 +0000 UTC m=+33.117442181 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494560 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494553406 +0000 UTC m=+33.117467592 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494607 4430 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494617 4430 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494641 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494634248 +0000 UTC m=+33.117548314 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494656 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web podName:6cfc08c2-f287-40b8-bf28-4f884595e93c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494649499 +0000 UTC m=+33.117563575 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494665 4430 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494676 4430 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494687 4430 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494740 4430 secret.go:189] Couldn't 
get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494709 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.4946978 +0000 UTC m=+33.117611966 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494807 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494794653 +0000 UTC m=+33.117708809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494828 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy podName:ff21a9a5-706f-4c71-bd0c-5586374f819a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:52.494819423 +0000 UTC m=+33.117733599 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:08:36.499841 master-0 kubenswrapper[4430]: E1203 14:08:36.494843 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:52.494835334 +0000 UTC m=+33.117749500 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : object "openshift-console"/"console-config" not registered Dec 03 14:08:36.675801 master-0 kubenswrapper[4430]: I1203 14:08:36.675711 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:36.675801 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:36.675801 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:36.675801 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:36.676941 master-0 kubenswrapper[4430]: I1203 14:08:36.675821 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.910947 4430 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.327s" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911157 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911179 4430 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="739dc4db-558c-4492-aca2-06261f310a29" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911202 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911213 4430 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="739dc4db-558c-4492-aca2-06261f310a29" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911373 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911412 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.911476 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911585 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.911587 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911608 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911643 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911656 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911654 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911690 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911682 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911748 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911783 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.911779 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911800 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911791 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911822 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911826 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911832 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911906 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.911914 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911898 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911938 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911952 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911954 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911988 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911992 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.911994 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912031 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912030 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912015 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912036 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912060 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912001 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912044 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.912170 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912194 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912218 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912240 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912224 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912262 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912248 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912312 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912324 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912356 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912367 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912321 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912344 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912406 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912395 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912314 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912382 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912452 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.912489 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912546 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912526 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912589 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.912719 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912763 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.912827 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912953 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: I1203 14:08:36.912981 4430 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="739dc4db-558c-4492-aca2-06261f310a29"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.913669 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914203 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914331 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914370 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914402 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914451 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914409 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914504 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914578 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914643 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914667 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914688 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914695 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914814 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914866 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914884 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914951 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914951 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.914974 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915018 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915070 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915110 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915145 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915185 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915224 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915300 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915332 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915348 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915383 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915454 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915503 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915533 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915557 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915584 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915638 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915708 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915836 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915868 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915895 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915896 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915928 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915961 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915968 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:36.916281 master-0 kubenswrapper[4430]: E1203 14:08:36.915967 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:08:36.931929 master-0 kubenswrapper[4430]: I1203 14:08:36.931840 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Dec 03 14:08:37.221837 master-0 kubenswrapper[4430]: I1203 14:08:37.221762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:37.222120 master-0 kubenswrapper[4430]: I1203 14:08:37.222005 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:37.222120 master-0 kubenswrapper[4430]: E1203 14:08:37.222010 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.222120 master-0 kubenswrapper[4430]: E1203 14:08:37.222052 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.222120 master-0 kubenswrapper[4430]: E1203 14:08:37.222067 4430 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222310 master-0 kubenswrapper[4430]: E1203 14:08:37.222132 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222113123 +0000 UTC m=+33.845027189 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222310 master-0 kubenswrapper[4430]: I1203 14:08:37.222240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:37.222444 master-0 kubenswrapper[4430]: E1203 14:08:37.222293 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.222444 master-0 kubenswrapper[4430]: E1203 14:08:37.222344 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.222444 master-0 kubenswrapper[4430]: E1203 14:08:37.222360 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222444 master-0 kubenswrapper[4430]: I1203 14:08:37.222373 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:37.222444 master-0 kubenswrapper[4430]: I1203 14:08:37.222437 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222458 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222489 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222506 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: I1203 14:08:37.222505 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222529 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222551 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222560 4430 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222576 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222557656 +0000 UTC m=+33.845471772 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222603 4430 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222618 4430 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.222655 master-0 kubenswrapper[4430]: E1203 14:08:37.222626 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222652 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222724 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222723 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222602587 +0000 UTC m=+33.845516713 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222741 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222908 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222879905 +0000 UTC m=+33.845794151 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222938 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222926866 +0000 UTC m=+33.845841152 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.222957 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.222948177 +0000 UTC m=+33.845862463 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: I1203 14:08:37.223045 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: I1203 14:08:37.223139 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223175 4430 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223192 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223233 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.223214984 +0000 UTC m=+33.846129060 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223242 4430 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: I1203 14:08:37.223177 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223260 4430 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.223259 master-0 kubenswrapper[4430]: E1203 14:08:37.223270 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: I1203 14:08:37.223362 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223394 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.223383769 +0000 UTC m=+33.846298025 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223464 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223481 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223488 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223509 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object 
"openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223556 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223588 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223517 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.223506393 +0000 UTC m=+33.846420469 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.223877 master-0 kubenswrapper[4430]: E1203 14:08:37.223785 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.22377478 +0000 UTC m=+33.846688856 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.224447 master-0 kubenswrapper[4430]: I1203 14:08:37.224404 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:37.224556 master-0 kubenswrapper[4430]: I1203 14:08:37.224524 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:37.224607 master-0 kubenswrapper[4430]: I1203 14:08:37.224557 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:37.224607 master-0 kubenswrapper[4430]: I1203 14:08:37.224593 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: 
\"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:37.224687 master-0 kubenswrapper[4430]: I1203 14:08:37.224636 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:37.224687 master-0 kubenswrapper[4430]: E1203 14:08:37.224643 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224687 master-0 kubenswrapper[4430]: E1203 14:08:37.224660 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.224687 master-0 kubenswrapper[4430]: E1203 14:08:37.224671 4430 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.224827 master-0 kubenswrapper[4430]: E1203 14:08:37.224698 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:53.224691096 +0000 UTC m=+33.847605172 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.224827 master-0 kubenswrapper[4430]: E1203 14:08:37.224693 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224827 master-0 kubenswrapper[4430]: I1203 14:08:37.224733 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224734 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: I1203 14:08:37.224883 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224883 4430 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224915 4430 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224922 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.224900252 +0000 UTC m=+33.847814328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224925 4430 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224741 4430 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224952 4430 projected.go:288] Couldn't get configMap 
openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.224968 master-0 kubenswrapper[4430]: E1203 14:08:37.224970 4430 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.224955 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.224949064 +0000 UTC m=+33.847863130 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.224998 4430 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.224782 4430 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.224776 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not 
registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225046 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.225034466 +0000 UTC m=+33.847948782 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225049 4430 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225059 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225082 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225140 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:53.225123519 +0000 UTC m=+33.848037675 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.224980 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225189 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225083 4430 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225297 master-0 kubenswrapper[4430]: E1203 14:08:37.225232 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.225220322 +0000 UTC m=+33.848134398 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225331 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225375 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225396 4430 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225405 4430 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225453 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.225438368 +0000 UTC m=+33.848352594 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225487 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.225479239 +0000 UTC m=+33.848393525 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225526 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225575 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225610 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225665 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: 
\"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225769 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225665 4430 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225805 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225821 4430 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: 
E1203 14:08:37.225825 4430 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225838 4430 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: E1203 14:08:37.225844 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:37.225840 master-0 kubenswrapper[4430]: I1203 14:08:37.225838 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225860 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225896 4430 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:08:37.226644 master-0 
kubenswrapper[4430]: E1203 14:08:37.225898 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225913 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225921 4430 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225728 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225969 4430 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225992 4430 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225904 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225736 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226089 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226102 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225774 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226140 4430 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226151 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.225882 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq
podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.22587076 +0000 UTC m=+33.848785016 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226323 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226311923 +0000 UTC m=+33.849225999 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226344 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226337673 +0000 UTC m=+33.849251749 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226359 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226350774 +0000 UTC m=+33.849264850 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226371 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226365734 +0000 UTC m=+33.849279810 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226388 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226381995 +0000 UTC m=+33.849296071 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226389 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: I1203 14:08:37.226316 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226413 4430 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226446 4430 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226399 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226393985 +0000 UTC m=+33.849308061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: E1203 14:08:37.226594 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.22658576 +0000 UTC m=+33.849499836 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: I1203 14:08:37.226654 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:37.226644 master-0 kubenswrapper[4430]: I1203 14:08:37.226691 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: I1203 14:08:37.226815 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226841 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226872 4430 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226889 4430 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226938 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: I1203 14:08:37.226888 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226949 4430 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226985 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.227752 master-0
kubenswrapper[4430]: E1203 14:08:37.226993 4430 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226997 4430 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227010 4430 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227028 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.226965 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.226943021 +0000 UTC m=+33.849857137 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227043 4430 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227051 4430 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: I1203 14:08:37.227076 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227087 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.227073034 +0000 UTC m=+33.849987290 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227126 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227135 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227142 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227165 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.227157777 +0000 UTC m=+33.850071853 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227177 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.227170647 +0000 UTC m=+33.850084723 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.227752 master-0 kubenswrapper[4430]: E1203 14:08:37.227206 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:53.227199318 +0000 UTC m=+33.850113394 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:37.584138 master-0 kubenswrapper[4430]: I1203 14:08:37.584010 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:37.584138 master-0 kubenswrapper[4430]: I1203 14:08:37.584087 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584033 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: E1203 14:08:37.584160 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584207 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584235 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584235 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: E1203 14:08:37.584269 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584215 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584283 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584301 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584317 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: I1203 14:08:37.584345 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:37.584407 master-0 kubenswrapper[4430]: E1203 14:08:37.584409 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:37.584963 master-0 kubenswrapper[4430]: I1203 14:08:37.584436 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:37.584963 master-0 kubenswrapper[4430]: I1203 14:08:37.584469 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:37.584963 master-0 kubenswrapper[4430]: E1203 14:08:37.584610 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:37.584963 master-0 kubenswrapper[4430]: E1203 14:08:37.584728 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:37.585213 master-0 kubenswrapper[4430]: E1203 14:08:37.585086 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:08:37.585213 master-0 kubenswrapper[4430]: E1203 14:08:37.585174 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:08:37.585313 master-0 kubenswrapper[4430]: E1203 14:08:37.585252 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:37.585376 master-0 kubenswrapper[4430]: E1203 14:08:37.585348 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:37.585608 master-0 kubenswrapper[4430]: E1203 14:08:37.585556 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:37.585720 master-0 kubenswrapper[4430]: E1203 14:08:37.585666 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:08:37.585823 master-0 kubenswrapper[4430]: E1203 14:08:37.585792 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:37.585953 master-0 kubenswrapper[4430]: E1203 14:08:37.585909 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:37.672662 master-0 kubenswrapper[4430]: I1203 14:08:37.672575 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:37.672662 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:37.672662 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:37.672662 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:37.672662 master-0 kubenswrapper[4430]: I1203 14:08:37.672670 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.585943 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.585979 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586023 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586148 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586155 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.586145 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586207 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586221 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586219 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586376 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586382 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.586389 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586460 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586499 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586505 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586517 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586561 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586569 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586599 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586606 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586625 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586641 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586643 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586629 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586705 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586745 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586762 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586765 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.586760 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586806 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586821 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586842 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586861 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586871 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586880 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586846 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586893 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586922 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586932 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586938 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586952 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.586923 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587023 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.587031 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587059 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587074 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587090 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587099 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587133 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587146 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587146 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587168 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587175 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.587286 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587528 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587534 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: I1203 14:08:38.587570 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.587623 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.587856 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.588144 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.588269 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589214 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589324 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589469 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589532 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589624 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589697 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.589796 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.590002 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:38.590098 master-0 kubenswrapper[4430]: E1203 14:08:38.590097 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.590284 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.590400 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.590518 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591049 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591456 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591616 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591696 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591794 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591881 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592096 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.591975 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592179 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592284 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592375 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592569 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592486 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592830 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.592996 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.593194 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.593356 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.593501 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.593646 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:38.594327 master-0 kubenswrapper[4430]: E1203 14:08:38.593735 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:38.597046 master-0 kubenswrapper[4430]: E1203 14:08:38.596973 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:38.597210 master-0 kubenswrapper[4430]: E1203 14:08:38.597090 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:38.597210 master-0 kubenswrapper[4430]: E1203 14:08:38.597187 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:38.597292 master-0 kubenswrapper[4430]: E1203 14:08:38.597259 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:38.597404 master-0 kubenswrapper[4430]: E1203 14:08:38.597373 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:38.597535 master-0 kubenswrapper[4430]: E1203 14:08:38.597505 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:38.597622 master-0 kubenswrapper[4430]: E1203 14:08:38.597591 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:38.597701 master-0 kubenswrapper[4430]: E1203 14:08:38.597671 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:38.598247 master-0 kubenswrapper[4430]: E1203 14:08:38.598082 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:38.598651 master-0 kubenswrapper[4430]: E1203 14:08:38.598541 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:38.598812 master-0 kubenswrapper[4430]: E1203 14:08:38.598785 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:38.604988 master-0 kubenswrapper[4430]: I1203 14:08:38.604930 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:08:38.676271 master-0 kubenswrapper[4430]: I1203 14:08:38.676187 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:38.676271 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:38.676271 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:38.676271 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:38.676640 master-0 kubenswrapper[4430]: I1203 14:08:38.676349 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:38.800542 master-0 kubenswrapper[4430]: I1203 14:08:38.800449 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:08:38.800906 master-0 kubenswrapper[4430]: E1203 14:08:38.800819 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:38.801010 master-0 kubenswrapper[4430]: E1203 14:08:38.800913 4430 projected.go:194] Error preparing data for projected volume 
kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:38.801106 master-0 kubenswrapper[4430]: E1203 14:08:38.801047 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.801000162 +0000 UTC m=+35.423914358 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:38.801804 master-0 kubenswrapper[4430]: I1203 14:08:38.801754 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:38.802118 master-0 kubenswrapper[4430]: I1203 14:08:38.801867 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:38.802118 master-0 kubenswrapper[4430]: I1203 14:08:38.801938 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: 
\"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:38.802118 master-0 kubenswrapper[4430]: E1203 14:08:38.802090 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.802248 master-0 kubenswrapper[4430]: E1203 14:08:38.802124 4430 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.802248 master-0 kubenswrapper[4430]: E1203 14:08:38.802140 4430 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802248 master-0 kubenswrapper[4430]: I1203 14:08:38.802161 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:38.802248 master-0 kubenswrapper[4430]: E1203 14:08:38.802218 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:54.802196016 +0000 UTC m=+35.425110092 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 14:08:38.802306 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 14:08:38.802326 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 14:08:38.802337 4430 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 14:08:38.802359 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 14:08:38.802377 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.802392 master-0 kubenswrapper[4430]: E1203 
14:08:38.802386 4430 projected.go:194] Error preparing data for projected volume kube-api-access-nddv9 for pod openshift-console/console-648d88c756-vswh8: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802402 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802382281 +0000 UTC m=+35.425296357 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: I1203 14:08:38.802306 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802447 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9 podName:62f94ae7-6043-4761-a16b-e0f072b1364b nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802414432 +0000 UTC m=+35.425328508 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nddv9" (UniqueName: "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9") pod "console-648d88c756-vswh8" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802091 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802464 4430 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802475 4430 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: I1203 14:08:38.802473 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802499 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:54.802492895 +0000 UTC m=+35.425406971 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802544 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802561 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: I1203 14:08:38.802559 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:38.802592 master-0 kubenswrapper[4430]: E1203 14:08:38.802570 4430 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802594 4430 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object 
"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: I1203 14:08:38.802599 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802631 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802654 4430 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802667 4430 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802676 4430 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802606 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:54.802595977 +0000 UTC m=+35.425510123 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802707 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802725 4430 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802758 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: I1203 14:08:38.802779 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802791 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802781293 +0000 UTC m=+35.425695569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: I1203 14:08:38.802812 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802820 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802812244 +0000 UTC m=+35.425726510 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802837 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802831604 +0000 UTC m=+35.425745680 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802876 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802889 4430 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802897 4430 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 
kubenswrapper[4430]: I1203 14:08:38.802955 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802958 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802975 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.802965018 +0000 UTC m=+35.425879084 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802984 4430 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.802993 4430 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803057 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803066 4430 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803072 4430 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803092 4430 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.803086181 +0000 UTC m=+35.426000257 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: I1203 14:08:38.803070 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803109 4430 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803121 4430 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.803104 master-0 kubenswrapper[4430]: E1203 14:08:38.803127 4430 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.803155 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.803149103 +0000 UTC m=+35.426063179 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.803173 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.803163884 +0000 UTC m=+35.426077960 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: I1203 14:08:38.803190 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: I1203 14:08:38.803282 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: I1203 14:08:38.803326 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: I1203 14:08:38.803358 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: 
\"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: I1203 14:08:38.803463 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804104 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804115 4430 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804121 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lxlb8 for pod openshift-controller-manager/controller-manager-78d987764b-xcs5w: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804141 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8 podName:d3200abb-a440-44db-8897-79c809c1d838 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:08:54.804134801 +0000 UTC m=+35.427048877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-lxlb8" (UniqueName: "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8") pod "controller-manager-78d987764b-xcs5w" (UID: "d3200abb-a440-44db-8897-79c809c1d838") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804181 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804191 4430 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804197 4430 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804225 master-0 kubenswrapper[4430]: E1203 14:08:38.804216 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.804210773 +0000 UTC m=+35.427124849 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804253 4430 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804261 4430 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804269 4430 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804287 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.804281326 +0000 UTC m=+35.427195402 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804326 4430 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804334 4430 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804341 4430 projected.go:194] Error preparing data for projected volume kube-api-access-gfzrw for pod openshift-console/console-c5d7cd7f9-2hp75: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804368 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw podName:4dd1d142-6569-438d-b0c2-582aed44812d nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.804353878 +0000 UTC m=+35.427267954 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gfzrw" (UniqueName: "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw") pod "console-c5d7cd7f9-2hp75" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804408 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804434 4430 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804443 4430 projected.go:194] Error preparing data for projected volume kube-api-access-lq4dz for pod openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:38.804722 master-0 kubenswrapper[4430]: E1203 14:08:38.804477 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz podName:1ba502ba-1179-478e-b4b9-f3409320b0ad nodeName:}" failed. No retries permitted until 2025-12-03 14:08:54.804466661 +0000 UTC m=+35.427380947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lq4dz" (UniqueName: "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz") pod "route-controller-manager-678c7f799b-4b7nv" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:08:39.583691 master-0 kubenswrapper[4430]: I1203 14:08:39.583533 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:39.583691 master-0 kubenswrapper[4430]: I1203 14:08:39.583603 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:39.583691 master-0 kubenswrapper[4430]: I1203 14:08:39.583619 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:39.584050 master-0 kubenswrapper[4430]: I1203 14:08:39.583751 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: E1203 14:08:39.589612 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: I1203 14:08:39.589666 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: I1203 14:08:39.589719 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: I1203 14:08:39.589740 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: I1203 14:08:39.589704 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:39.589782 master-0 kubenswrapper[4430]: I1203 14:08:39.589772 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: I1203 14:08:39.589768 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: E1203 14:08:39.589847 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: I1203 14:08:39.589855 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: I1203 14:08:39.589873 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: E1203 14:08:39.589954 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: I1203 14:08:39.590006 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: E1203 14:08:39.590057 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: E1203 14:08:39.590098 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:39.590138 master-0 kubenswrapper[4430]: E1203 14:08:39.590143 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590196 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590258 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590320 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590370 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590446 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590815 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:08:39.590999 master-0 kubenswrapper[4430]: E1203 14:08:39.590872 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:08:39.674128 master-0 kubenswrapper[4430]: I1203 14:08:39.674078 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:39.674128 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:39.674128 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:39.674128 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:39.674590 master-0 kubenswrapper[4430]: I1203 14:08:39.674559 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:39.767200 master-0 kubenswrapper[4430]: E1203 14:08:39.767151 4430 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: I1203 14:08:40.583660 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: I1203 14:08:40.583724 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: I1203 14:08:40.583693 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: I1203 14:08:40.583773 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: I1203 14:08:40.583900 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:40.583956 master-0 kubenswrapper[4430]: E1203 14:08:40.583896 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.583986 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: E1203 14:08:40.584045 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: E1203 14:08:40.584102 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584105 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584130 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584148 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584283 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584525 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584538 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584570 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584588 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584592 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: E1203 14:08:40.584706 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:40.584708 master-0 kubenswrapper[4430]: I1203 14:08:40.584721 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584732 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584764 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584777 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584786 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584808 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584825 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584832 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584843 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584860 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: E1203 14:08:40.584870 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584884 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584878 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584879 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584880 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584911 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584925 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584911 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584926 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584904 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: E1203 14:08:40.584973 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.584994 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:40.585296 master-0 kubenswrapper[4430]: I1203 14:08:40.585068 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: E1203 14:08:40.585496 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585536 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585548 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585554 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585574 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585589 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585595 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585600 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585622 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585628 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585644 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: E1203 14:08:40.585725 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585750 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585775 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585782 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585788 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585795 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: E1203 14:08:40.585880 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: I1203 14:08:40.585916 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:40.586117 master-0 kubenswrapper[4430]: E1203 14:08:40.586104 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586219 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586250 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586373 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586496 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586585 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:40.586711 master-0 kubenswrapper[4430]: E1203 14:08:40.586672 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:08:40.586938 master-0 kubenswrapper[4430]: E1203 14:08:40.586766 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:08:40.586938 master-0 kubenswrapper[4430]: E1203 14:08:40.586871 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:08:40.586997 master-0 kubenswrapper[4430]: E1203 14:08:40.586969 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:08:40.587206 master-0 kubenswrapper[4430]: E1203 14:08:40.587172 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" Dec 03 14:08:40.587293 master-0 kubenswrapper[4430]: E1203 14:08:40.587266 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" Dec 03 14:08:40.587377 master-0 kubenswrapper[4430]: E1203 14:08:40.587344 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:08:40.587761 master-0 kubenswrapper[4430]: E1203 14:08:40.587728 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:08:40.587880 master-0 kubenswrapper[4430]: E1203 14:08:40.587852 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:40.587972 master-0 kubenswrapper[4430]: E1203 14:08:40.587938 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:08:40.588085 master-0 kubenswrapper[4430]: E1203 14:08:40.588052 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:08:40.588134 master-0 kubenswrapper[4430]: E1203 14:08:40.588112 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:08:40.588219 master-0 kubenswrapper[4430]: E1203 14:08:40.588200 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:40.588326 master-0 kubenswrapper[4430]: E1203 14:08:40.588271 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:40.588362 master-0 kubenswrapper[4430]: E1203 14:08:40.588336 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" Dec 03 14:08:40.588470 master-0 kubenswrapper[4430]: E1203 14:08:40.588449 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:08:40.588535 master-0 kubenswrapper[4430]: E1203 14:08:40.588509 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:08:40.588709 master-0 kubenswrapper[4430]: E1203 14:08:40.588657 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:40.588742 master-0 kubenswrapper[4430]: E1203 14:08:40.588723 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:08:40.588873 master-0 kubenswrapper[4430]: E1203 14:08:40.588826 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:08:40.588905 master-0 kubenswrapper[4430]: E1203 14:08:40.588882 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:08:40.588964 master-0 kubenswrapper[4430]: E1203 14:08:40.588945 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:08:40.589039 master-0 kubenswrapper[4430]: E1203 14:08:40.589016 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:08:40.589135 master-0 kubenswrapper[4430]: E1203 14:08:40.589091 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:08:40.589221 master-0 kubenswrapper[4430]: E1203 14:08:40.589198 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" Dec 03 14:08:40.589581 master-0 kubenswrapper[4430]: E1203 14:08:40.589396 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:40.589581 master-0 kubenswrapper[4430]: E1203 14:08:40.589531 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:08:40.589650 master-0 kubenswrapper[4430]: E1203 14:08:40.589608 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:08:40.589761 master-0 kubenswrapper[4430]: E1203 14:08:40.589736 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:08:40.589922 master-0 kubenswrapper[4430]: E1203 14:08:40.589898 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:08:40.590000 master-0 kubenswrapper[4430]: E1203 14:08:40.589979 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:08:40.590082 master-0 kubenswrapper[4430]: E1203 14:08:40.590052 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" Dec 03 14:08:40.590206 master-0 kubenswrapper[4430]: E1203 14:08:40.590180 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:08:40.590289 master-0 kubenswrapper[4430]: E1203 14:08:40.590265 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:08:40.590786 master-0 kubenswrapper[4430]: E1203 14:08:40.590314 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:08:40.590786 master-0 kubenswrapper[4430]: E1203 14:08:40.590402 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:08:40.590786 master-0 kubenswrapper[4430]: E1203 14:08:40.590492 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:08:40.673851 master-0 kubenswrapper[4430]: I1203 14:08:40.673764 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:40.673851 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:40.673851 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:40.673851 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:40.674181 master-0 kubenswrapper[4430]: I1203 14:08:40.673866 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:41.584123 master-0 kubenswrapper[4430]: I1203 14:08:41.584061 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:41.584390 master-0 kubenswrapper[4430]: E1203 14:08:41.584196 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:41.584390 master-0 kubenswrapper[4430]: I1203 14:08:41.584264 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:41.584390 master-0 kubenswrapper[4430]: E1203 14:08:41.584320 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:41.584390 master-0 kubenswrapper[4430]: I1203 14:08:41.584374 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:41.584688 master-0 kubenswrapper[4430]: E1203 14:08:41.584617 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:41.584688 master-0 kubenswrapper[4430]: I1203 14:08:41.584643 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:41.584688 master-0 kubenswrapper[4430]: I1203 14:08:41.584657 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:41.584688 master-0 kubenswrapper[4430]: I1203 14:08:41.584658 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:41.584688 master-0 kubenswrapper[4430]: I1203 14:08:41.584687 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584707 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584715 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584749 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584715 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584759 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: E1203 14:08:41.584755 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: E1203 14:08:41.584881 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:41.584920 master-0 kubenswrapper[4430]: I1203 14:08:41.584789 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:41.585237 master-0 kubenswrapper[4430]: E1203 14:08:41.584944 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:41.585237 master-0 kubenswrapper[4430]: E1203 14:08:41.585054 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:41.585237 master-0 kubenswrapper[4430]: E1203 14:08:41.585121 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:41.585400 master-0 kubenswrapper[4430]: E1203 14:08:41.585240 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:41.585400 master-0 kubenswrapper[4430]: E1203 14:08:41.585323 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:08:41.585400 master-0 kubenswrapper[4430]: E1203 14:08:41.585389 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:08:41.585572 master-0 kubenswrapper[4430]: E1203 14:08:41.585490 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b"
Dec 03 14:08:41.585609 master-0 kubenswrapper[4430]: E1203 14:08:41.585594 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:08:41.674282 master-0 kubenswrapper[4430]: I1203 14:08:41.674128 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:41.674282 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:41.674282 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:41.674282 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:41.675014 master-0 kubenswrapper[4430]: I1203 14:08:41.674298 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:42.584537 master-0 kubenswrapper[4430]: I1203 14:08:42.584400 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:42.584537 master-0 kubenswrapper[4430]: I1203 14:08:42.584476 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: E1203 14:08:42.584666 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584705 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584706 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584737 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584767 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584778 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:42.584844 master-0 kubenswrapper[4430]: I1203 14:08:42.584843 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: I1203 14:08:42.584793 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: I1203 14:08:42.584921 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: I1203 14:08:42.584975 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: I1203 14:08:42.585118 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: I1203 14:08:42.585135 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:42.585158 master-0 kubenswrapper[4430]: E1203 14:08:42.585137 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585152 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585198 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585195 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585193 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585229 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585247 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585231 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585266 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585297 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585302 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585315 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585272 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585319 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585353 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585355 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:42.585366 master-0 kubenswrapper[4430]: I1203 14:08:42.585280 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585382 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585206 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585259 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585331 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585328 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585530 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585553 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585584 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585366 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585286 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585303 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585387 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585315 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585277 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: E1203 14:08:42.585510 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585214 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585534 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585350 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585565 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585567 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:42.585970 master-0 kubenswrapper[4430]: I1203 14:08:42.585354 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: I1203 14:08:42.586033 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: I1203 14:08:42.586100 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: I1203 14:08:42.586124 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: E1203 14:08:42.585964 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: E1203 14:08:42.586212 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: E1203 14:08:42.586302 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: E1203 14:08:42.586477 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:42.586740 master-0 kubenswrapper[4430]: E1203 14:08:42.586681 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:42.587036 master-0 kubenswrapper[4430]: E1203 14:08:42.586829 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad"
Dec 03 14:08:42.587036 master-0 kubenswrapper[4430]: E1203 14:08:42.586914 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:08:42.587108 master-0 kubenswrapper[4430]: E1203 14:08:42.587036 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:08:42.587162 master-0 kubenswrapper[4430]: E1203 14:08:42.587140 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:42.587249 master-0 kubenswrapper[4430]: E1203 14:08:42.587216 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:08:42.587294 master-0 kubenswrapper[4430]: E1203 14:08:42.587280 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:42.587370 master-0 kubenswrapper[4430]: E1203 14:08:42.587343 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:08:42.587629 master-0 kubenswrapper[4430]: E1203 14:08:42.587577 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:42.587680 master-0 kubenswrapper[4430]: E1203 14:08:42.587632 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:08:42.587932 master-0 kubenswrapper[4430]: E1203 14:08:42.587865 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:08:42.588034 master-0 kubenswrapper[4430]: E1203 14:08:42.587990 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:08:42.588111 master-0 kubenswrapper[4430]: E1203 14:08:42.588085 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:42.588204 master-0 kubenswrapper[4430]: E1203 14:08:42.588179 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:42.588282 master-0 kubenswrapper[4430]: E1203 14:08:42.588253 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:08:42.588472 master-0 kubenswrapper[4430]: E1203 14:08:42.588400 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:08:42.588595 master-0 kubenswrapper[4430]: E1203 14:08:42.588570 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:42.588812 master-0 kubenswrapper[4430]: E1203 14:08:42.588784 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c"
Dec 03 14:08:42.589186 master-0 kubenswrapper[4430]: E1203 14:08:42.589132 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:42.589298 master-0 kubenswrapper[4430]: E1203 14:08:42.589268 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:42.589380 master-0 kubenswrapper[4430]: E1203 14:08:42.589345 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:42.589545 master-0 kubenswrapper[4430]: E1203 14:08:42.589511 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:08:42.589651 master-0 kubenswrapper[4430]: E1203 14:08:42.589625 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:42.589729 master-0 kubenswrapper[4430]: E1203 14:08:42.589704 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:42.589809 master-0 kubenswrapper[4430]: E1203 14:08:42.589785 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:08:42.589876 master-0 kubenswrapper[4430]: E1203 14:08:42.589854 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:42.589974 master-0 kubenswrapper[4430]: E1203 14:08:42.589951 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:08:42.591177 master-0 kubenswrapper[4430]: E1203 14:08:42.590270 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.590375 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.590480 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.590575 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.590820 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.590888 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: E1203 14:08:42.591034 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:42.591259 master-0 kubenswrapper[4430]: I1203 14:08:42.591119 4430 scope.go:117] "RemoveContainer" containerID="91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e"
Dec 03 14:08:42.591673 master-0 kubenswrapper[4430]: E1203 14:08:42.591284 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:08:42.591673 master-0 kubenswrapper[4430]: E1203 14:08:42.591453 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:42.591673 master-0 kubenswrapper[4430]: E1203 14:08:42.591576 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:08:42.591784 master-0 kubenswrapper[4430]: E1203 14:08:42.591682 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:08:42.591835 master-0 kubenswrapper[4430]: E1203 14:08:42.591796 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:42.591939 master-0 kubenswrapper[4430]: E1203 14:08:42.591897 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:08:42.592037 master-0 kubenswrapper[4430]: E1203 14:08:42.592000 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:42.592096 master-0 kubenswrapper[4430]: E1203 14:08:42.592060 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:08:42.592289 master-0 kubenswrapper[4430]: E1203 14:08:42.592251 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:42.592362 master-0 kubenswrapper[4430]: E1203 14:08:42.592343 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:42.592655 master-0 kubenswrapper[4430]: E1203 14:08:42.592486 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:08:42.675013 master-0 kubenswrapper[4430]: I1203 14:08:42.674942 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:42.675013 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:42.675013 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:42.675013 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:42.675846 master-0 kubenswrapper[4430]: I1203 14:08:42.675053 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:42.988150 master-0 kubenswrapper[4430]: I1203 14:08:42.988070 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/2.log" Dec 03 14:08:42.988742 master-0 kubenswrapper[4430]: I1203 14:08:42.988702 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"068ddb4d161d39aa30c2725ce031626c21271c908564c6ab6d59dc24ea4c3c49"} Dec 03 14:08:43.584109 master-0 kubenswrapper[4430]: I1203 14:08:43.583973 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: E1203 14:08:43.584144 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: I1203 14:08:43.584249 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: E1203 14:08:43.584310 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: I1203 14:08:43.584366 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: E1203 14:08:43.584473 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: I1203 14:08:43.584494 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: I1203 14:08:43.584518 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:43.584528 master-0 kubenswrapper[4430]: I1203 14:08:43.584005 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:43.584821 master-0 kubenswrapper[4430]: E1203 14:08:43.584589 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:08:43.584821 master-0 kubenswrapper[4430]: E1203 14:08:43.584643 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:08:43.584821 master-0 kubenswrapper[4430]: E1203 14:08:43.584710 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" Dec 03 14:08:43.585160 master-0 kubenswrapper[4430]: I1203 14:08:43.585115 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:43.585391 master-0 kubenswrapper[4430]: E1203 14:08:43.585346 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:08:43.585500 master-0 kubenswrapper[4430]: I1203 14:08:43.585399 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:43.585554 master-0 kubenswrapper[4430]: E1203 14:08:43.585498 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:08:43.585554 master-0 kubenswrapper[4430]: I1203 14:08:43.585516 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:43.585554 master-0 kubenswrapper[4430]: I1203 14:08:43.585537 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:43.585554 master-0 kubenswrapper[4430]: I1203 14:08:43.585558 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:43.585689 master-0 kubenswrapper[4430]: I1203 14:08:43.585577 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:43.585689 master-0 kubenswrapper[4430]: E1203 14:08:43.585630 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:08:43.585752 master-0 kubenswrapper[4430]: E1203 14:08:43.585702 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:08:43.585822 master-0 kubenswrapper[4430]: I1203 14:08:43.585359 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:43.585981 master-0 kubenswrapper[4430]: E1203 14:08:43.585901 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:08:43.586119 master-0 kubenswrapper[4430]: E1203 14:08:43.586086 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:08:43.586212 master-0 kubenswrapper[4430]: E1203 14:08:43.586188 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:08:43.673988 master-0 kubenswrapper[4430]: I1203 14:08:43.673908 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:43.673988 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:43.673988 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:43.673988 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:43.673988 master-0 kubenswrapper[4430]: I1203 14:08:43.673976 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:44.583379 master-0 kubenswrapper[4430]: I1203 14:08:44.583333 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:44.583379 master-0 kubenswrapper[4430]: I1203 14:08:44.583369 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: I1203 14:08:44.583400 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: I1203 14:08:44.583357 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: I1203 14:08:44.583347 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: E1203 14:08:44.583471 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: I1203 14:08:44.583333 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:44.584051 master-0 kubenswrapper[4430]: E1203 14:08:44.583922 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:08:44.584278 master-0 kubenswrapper[4430]: E1203 14:08:44.584065 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:08:44.584278 master-0 kubenswrapper[4430]: I1203 14:08:44.584101 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:44.584278 master-0 kubenswrapper[4430]: E1203 14:08:44.583969 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:08:44.584278 master-0 kubenswrapper[4430]: I1203 14:08:44.584241 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:44.584536 master-0 kubenswrapper[4430]: I1203 14:08:44.584504 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:44.584951 master-0 kubenswrapper[4430]: E1203 14:08:44.584790 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:08:44.585017 master-0 kubenswrapper[4430]: I1203 14:08:44.584949 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:08:44.585017 master-0 kubenswrapper[4430]: I1203 14:08:44.584989 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:44.585091 master-0 kubenswrapper[4430]: I1203 14:08:44.585036 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:44.585130 master-0 kubenswrapper[4430]: I1203 14:08:44.585089 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:44.585165 master-0 kubenswrapper[4430]: I1203 14:08:44.585127 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:44.585195 master-0 kubenswrapper[4430]: I1203 14:08:44.585169 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:44.585195 master-0 kubenswrapper[4430]: I1203 14:08:44.585149 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:44.585487 master-0 kubenswrapper[4430]: I1203 14:08:44.585368 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:44.585487 master-0 kubenswrapper[4430]: I1203 14:08:44.585432 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:44.585487 master-0 kubenswrapper[4430]: I1203 14:08:44.585454 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:44.585618 master-0 kubenswrapper[4430]: I1203 14:08:44.585462 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:44.585618 master-0 kubenswrapper[4430]: I1203 14:08:44.585522 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:44.585618 master-0 kubenswrapper[4430]: I1203 14:08:44.585473 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:44.585618 master-0 kubenswrapper[4430]: I1203 14:08:44.585595 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585616 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585670 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585703 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585627 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: E1203 14:08:44.585711 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585782 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:44.585799 master-0 kubenswrapper[4430]: I1203 14:08:44.585771 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585812 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585834 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585879 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585939 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585950 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:44.586016 master-0 kubenswrapper[4430]: I1203 14:08:44.585990 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586034 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586057 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586079 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586121 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: E1203 14:08:44.586158 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586209 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:44.586234 master-0 kubenswrapper[4430]: I1203 14:08:44.586220 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586275 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586289 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586307 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586368 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586384 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:44.586732 master-0 kubenswrapper[4430]: I1203 14:08:44.586636 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:44.586981 master-0 kubenswrapper[4430]: I1203 14:08:44.586840 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:08:44.586981 master-0 kubenswrapper[4430]: E1203 14:08:44.586885 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:08:44.587093 master-0 kubenswrapper[4430]: I1203 14:08:44.586996 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.587255 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: I1203 14:08:44.587400 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.587560 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.588007 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.588189 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.588454 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.588676 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.588889 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.589043 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.589236 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.589476 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.589640 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.589833 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.590022 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.590192 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.590348 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: I1203 14:08:44.590492 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.590889 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.591075 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.591291 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:08:44.591629 master-0 kubenswrapper[4430]: E1203 14:08:44.591442 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: E1203 14:08:44.591773 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: E1203 14:08:44.591988 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: E1203 14:08:44.592155 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: E1203 14:08:44.592295 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: E1203 14:08:44.592396 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:08:44.592661 master-0 kubenswrapper[4430]: I1203 14:08:44.584708 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:44.592955 master-0 kubenswrapper[4430]: E1203 14:08:44.592798 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:08:44.593009 master-0 kubenswrapper[4430]: E1203 14:08:44.592955 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:08:44.593158 master-0 kubenswrapper[4430]: E1203 14:08:44.593101 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:08:44.593285 master-0 kubenswrapper[4430]: E1203 14:08:44.593218 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:08:44.594387 master-0 kubenswrapper[4430]: E1203 14:08:44.594115 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:08:44.594387 master-0 kubenswrapper[4430]: E1203 14:08:44.594191 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:08:44.594764 master-0 kubenswrapper[4430]: E1203 14:08:44.594450 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:08:44.594764 master-0 kubenswrapper[4430]: E1203 14:08:44.594500 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:08:44.594764 master-0 kubenswrapper[4430]: E1203 14:08:44.594584 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:08:44.594764 master-0 kubenswrapper[4430]: E1203 14:08:44.594662 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:08:44.594764 master-0 kubenswrapper[4430]: E1203 14:08:44.594735 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:08:44.595017 master-0 kubenswrapper[4430]: E1203 14:08:44.594831 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:08:44.595017 master-0 kubenswrapper[4430]: E1203 14:08:44.594909 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d"
Dec 03 14:08:44.595146 master-0 kubenswrapper[4430]: E1203 14:08:44.595118 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c"
Dec 03 14:08:44.595377 master-0 kubenswrapper[4430]: E1203 14:08:44.595195 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:08:44.595377 master-0 kubenswrapper[4430]: E1203 14:08:44.595246 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:08:44.595377 master-0 kubenswrapper[4430]: E1203 14:08:44.595299 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:08:44.595377 master-0 kubenswrapper[4430]: E1203 14:08:44.595346 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:08:44.595579 master-0 kubenswrapper[4430]: E1203 14:08:44.595405 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:08:44.595579 master-0 kubenswrapper[4430]: E1203 14:08:44.595473 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:08:44.675099 master-0 kubenswrapper[4430]: I1203 14:08:44.674978 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:44.675099 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:44.675099 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:44.675099 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:44.675502 master-0 kubenswrapper[4430]: I1203 14:08:44.675466 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:45.635916 master-0 kubenswrapper[4430]: I1203 14:08:45.635859 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:45.635916 master-0 kubenswrapper[4430]: I1203 14:08:45.635901 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:45.635916 master-0 kubenswrapper[4430]: I1203 14:08:45.635859 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:45.635916 master-0 kubenswrapper[4430]: I1203 14:08:45.635909 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.635902 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.635950 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.635930 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.635857 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.635915 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636032 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636173 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636212 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636220 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636269 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636249 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636234 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:08:45.636674 master-0 kubenswrapper[4430]: I1203 14:08:45.636596 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:45.639256 master-0 kubenswrapper[4430]: I1203 14:08:45.639228 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 14:08:45.639447 master-0 kubenswrapper[4430]: I1203 14:08:45.639405 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 14:08:45.639507 master-0 kubenswrapper[4430]: I1203 14:08:45.639494 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640770 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-jmtqw"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640829 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640835 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640845 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640770 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640921 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640970 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.640999 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.641468 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.641507 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.641619 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.641851 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Dec 03 14:08:45.643370 master-0 kubenswrapper[4430]: I1203 14:08:45.642677 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 14:08:45.644580 master-0 kubenswrapper[4430]: I1203 14:08:45.644549 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 14:08:45.644652 master-0 kubenswrapper[4430]: I1203 14:08:45.644582 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 14:08:45.644652 master-0 kubenswrapper[4430]: I1203 14:08:45.644602 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 14:08:45.647291 master-0 kubenswrapper[4430]: I1203 14:08:45.647255 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Dec 03 14:08:45.647396 master-0 kubenswrapper[4430]: I1203 14:08:45.647383 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 14:08:45.650769 master-0 kubenswrapper[4430]: I1203 14:08:45.650723 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-twpdm"
Dec 03 14:08:45.653157 master-0 kubenswrapper[4430]: I1203 14:08:45.653048 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 14:08:45.653229 master-0 kubenswrapper[4430]: I1203 14:08:45.653173 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 03 14:08:45.653402 master-0 kubenswrapper[4430]: I1203 14:08:45.653345 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 14:08:45.653490 master-0 kubenswrapper[4430]: I1203 14:08:45.653479 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 03 14:08:45.653831 master-0 kubenswrapper[4430]: I1203 14:08:45.653789 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 03 14:08:45.654066 master-0 kubenswrapper[4430]: I1203 14:08:45.654042 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 14:08:45.654926 master-0 kubenswrapper[4430]: I1203 14:08:45.654293 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 14:08:45.654926 master-0 kubenswrapper[4430]: I1203 14:08:45.654409 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6"
Dec 03 14:08:45.655491 master-0 kubenswrapper[4430]: I1203 14:08:45.655399 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Dec 03 14:08:45.655694 master-0 kubenswrapper[4430]: I1203 14:08:45.655581 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Dec 03 14:08:45.655694 master-0 kubenswrapper[4430]: I1203 14:08:45.655656 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 14:08:45.655810 master-0 kubenswrapper[4430]: I1203 14:08:45.655721 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 03 14:08:45.655810 master-0 kubenswrapper[4430]: I1203 14:08:45.655780 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d"
Dec 03 14:08:45.655915 master-0 kubenswrapper[4430]: I1203 14:08:45.655886 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 14:08:45.655965 master-0 kubenswrapper[4430]: I1203 14:08:45.655946 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Dec 03 14:08:45.656024 master-0 kubenswrapper[4430]: I1203 14:08:45.655980 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 14:08:45.656313 master-0 kubenswrapper[4430]: I1203 14:08:45.656123 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Dec 03 14:08:45.656313 master-0 kubenswrapper[4430]: I1203 14:08:45.656191 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Dec 03 14:08:45.656486 master-0 kubenswrapper[4430]: I1203 14:08:45.656390 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 14:08:45.656705 master-0 kubenswrapper[4430]: I1203 14:08:45.656621 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2fgkw"
Dec 03 14:08:45.656705 master-0 kubenswrapper[4430]: I1203 14:08:45.656642 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 03 14:08:45.656803 master-0 kubenswrapper[4430]: I1203 14:08:45.656784 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 14:08:45.656870 master-0 kubenswrapper[4430]: I1203 14:08:45.656848 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 14:08:45.656953 master-0 kubenswrapper[4430]: I1203 14:08:45.656933 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 14:08:45.657036 master-0 kubenswrapper[4430]: I1203 14:08:45.657018 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 14:08:45.657088 master-0 kubenswrapper[4430]: I1203 14:08:45.657053 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 14:08:45.657137 master-0 kubenswrapper[4430]: I1203 14:08:45.657126 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 03 14:08:45.657217 master-0 kubenswrapper[4430]: I1203 14:08:45.657197 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 03 14:08:45.657509 master-0 kubenswrapper[4430]: I1203 14:08:45.657463 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l6rgr"
Dec 03 14:08:45.673770 master-0 kubenswrapper[4430]: I1203 14:08:45.673723 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 03 14:08:45.675171 master-0 kubenswrapper[4430]: I1203 14:08:45.675117 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:45.675171 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:45.675171 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:45.675171 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:45.675386 master-0 kubenswrapper[4430]: I1203 14:08:45.675205 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:45.675609 master-0 kubenswrapper[4430]: I1203 14:08:45.675485 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Dec 03 14:08:45.918240 master-0 kubenswrapper[4430]: I1203 14:08:45.918075 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:08:45.918482 master-0 kubenswrapper[4430]: I1203 14:08:45.918312 4430 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 03 14:08:45.941545 master-0 kubenswrapper[4430]: I1203 14:08:45.941457 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:08:46.584353 master-0 kubenswrapper[4430]: I1203 14:08:46.584284 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584389 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584492 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584534 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584545 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584505 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:46.584740 master-0 kubenswrapper[4430]: I1203 14:08:46.584516 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584760 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584777 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584841 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584808 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584828 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584797 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584303 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584897 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584910 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584953 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584988 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.584993 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585040 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585044 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585067 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585115 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585130 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:46.585194 master-0 kubenswrapper[4430]: I1203 14:08:46.585162 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.585197 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.585627 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.586760 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.586810 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.586846 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:46.586909 master-0 kubenswrapper[4430]: I1203 14:08:46.586891 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.586930 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.586953 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.586987 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587021 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587061 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587083 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587140 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587188 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587308 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587346 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587367 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:46.587573 master-0 kubenswrapper[4430]: I1203 14:08:46.587468 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:46.588311 master-0 kubenswrapper[4430]: I1203 14:08:46.587695 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:46.588311 master-0 kubenswrapper[4430]: I1203 14:08:46.587723 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:46.588311 master-0 kubenswrapper[4430]: I1203 14:08:46.587916 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:46.588311 master-0 kubenswrapper[4430]: I1203 14:08:46.587919 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:46.588311 master-0 kubenswrapper[4430]: I1203 14:08:46.588153 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.593204 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.593488 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.593618 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.593810 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.594134 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.594331 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.594514 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.594710 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Dec 03 14:08:46.594941 master-0 kubenswrapper[4430]: I1203 14:08:46.594804 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Dec 03 14:08:46.596287 master-0 kubenswrapper[4430]: I1203 14:08:46.596250 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-6sltv" Dec 03 14:08:46.596346 master-0 kubenswrapper[4430]: I1203 14:08:46.596298 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.596597 master-0 kubenswrapper[4430]: I1203 14:08:46.596573 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd" Dec 03 14:08:46.599539 master-0 kubenswrapper[4430]: I1203 14:08:46.599508 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 14:08:46.599539 master-0 kubenswrapper[4430]: I1203 14:08:46.599521 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 14:08:46.599685 master-0 kubenswrapper[4430]: I1203 14:08:46.599648 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.599733 master-0 kubenswrapper[4430]: I1203 14:08:46.599684 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Dec 03 14:08:46.599774 master-0 kubenswrapper[4430]: I1203 14:08:46.599737 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Dec 03 14:08:46.599872 master-0 kubenswrapper[4430]: I1203 14:08:46.599843 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Dec 03 14:08:46.600567 master-0 kubenswrapper[4430]: I1203 14:08:46.600540 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.600636 master-0 kubenswrapper[4430]: I1203 14:08:46.600616 4430 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Dec 03 14:08:46.600813 master-0 kubenswrapper[4430]: I1203 14:08:46.600784 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 14:08:46.600953 master-0 kubenswrapper[4430]: I1203 14:08:46.600933 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 14:08:46.601067 master-0 kubenswrapper[4430]: I1203 14:08:46.601050 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4" Dec 03 14:08:46.601116 master-0 kubenswrapper[4430]: I1203 14:08:46.601089 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 03 14:08:46.601220 master-0 kubenswrapper[4430]: I1203 14:08:46.601202 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 03 14:08:46.601340 master-0 kubenswrapper[4430]: I1203 14:08:46.601327 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Dec 03 14:08:46.601391 master-0 kubenswrapper[4430]: I1203 14:08:46.601362 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 14:08:46.601465 master-0 kubenswrapper[4430]: I1203 14:08:46.601452 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 03 14:08:46.601551 master-0 kubenswrapper[4430]: I1203 14:08:46.601530 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 14:08:46.601663 master-0 kubenswrapper[4430]: I1203 14:08:46.601642 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 14:08:46.601764 master-0 kubenswrapper[4430]: I1203 14:08:46.601748 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 14:08:46.601807 master-0 kubenswrapper[4430]: I1203 14:08:46.601653 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 14:08:46.601897 master-0 kubenswrapper[4430]: I1203 14:08:46.601874 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Dec 03 14:08:46.601942 master-0 kubenswrapper[4430]: I1203 14:08:46.601905 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2bc14vqi7sofg" Dec 03 14:08:46.601942 master-0 kubenswrapper[4430]: I1203 14:08:46.601918 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 14:08:46.602619 master-0 kubenswrapper[4430]: I1203 14:08:46.602592 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 14:08:46.602718 master-0 kubenswrapper[4430]: I1203 14:08:46.602692 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.603820 master-0 kubenswrapper[4430]: I1203 14:08:46.603788 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Dec 03 14:08:46.603959 master-0 kubenswrapper[4430]: I1203 14:08:46.603921 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 03 14:08:46.604145 master-0 kubenswrapper[4430]: I1203 14:08:46.603967 4430 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv" Dec 03 14:08:46.604145 master-0 kubenswrapper[4430]: I1203 14:08:46.604053 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 14:08:46.604145 master-0 kubenswrapper[4430]: I1203 14:08:46.604111 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Dec 03 14:08:46.604293 master-0 kubenswrapper[4430]: I1203 14:08:46.604073 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 03 14:08:46.604463 master-0 kubenswrapper[4430]: I1203 14:08:46.604445 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.604594 master-0 kubenswrapper[4430]: I1203 14:08:46.604569 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:08:46.604741 master-0 kubenswrapper[4430]: I1203 14:08:46.604717 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Dec 03 14:08:46.604741 master-0 kubenswrapper[4430]: I1203 14:08:46.604737 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 14:08:46.604847 master-0 kubenswrapper[4430]: I1203 14:08:46.604740 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 03 14:08:46.604847 master-0 kubenswrapper[4430]: I1203 14:08:46.603964 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:08:46.604847 master-0 kubenswrapper[4430]: I1203 
14:08:46.604835 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Dec 03 14:08:46.604847 master-0 kubenswrapper[4430]: I1203 14:08:46.604842 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Dec 03 14:08:46.605006 master-0 kubenswrapper[4430]: I1203 14:08:46.604735 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 14:08:46.605006 master-0 kubenswrapper[4430]: I1203 14:08:46.604912 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 03 14:08:46.605006 master-0 kubenswrapper[4430]: I1203 14:08:46.604961 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 14:08:46.605130 master-0 kubenswrapper[4430]: I1203 14:08:46.605115 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 03 14:08:46.605219 master-0 kubenswrapper[4430]: I1203 14:08:46.605202 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 14:08:46.605284 master-0 kubenswrapper[4430]: I1203 14:08:46.603925 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 14:08:46.606114 master-0 kubenswrapper[4430]: I1203 14:08:46.605638 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 03 14:08:46.606114 master-0 kubenswrapper[4430]: I1203 14:08:46.605729 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" 
Dec 03 14:08:46.606114 master-0 kubenswrapper[4430]: I1203 14:08:46.605962 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613119 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613283 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-2blfd" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613290 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613307 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613314 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613341 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7n524" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613369 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 14:08:46.613452 master-0 kubenswrapper[4430]: I1203 14:08:46.613290 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 14:08:46.613951 master-0 kubenswrapper[4430]: I1203 14:08:46.613520 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 
14:08:46.613951 master-0 kubenswrapper[4430]: I1203 14:08:46.613798 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 14:08:46.614254 master-0 kubenswrapper[4430]: I1203 14:08:46.614168 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614464 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614614 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614705 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614791 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614871 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614953 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614977 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Dec 03 14:08:46.615105 master-0 kubenswrapper[4430]: I1203 14:08:46.614995 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx" Dec 03 14:08:46.615105 master-0 
kubenswrapper[4430]: I1203 14:08:46.615036 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 14:08:46.615521 master-0 kubenswrapper[4430]: I1203 14:08:46.615173 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Dec 03 14:08:46.615713 master-0 kubenswrapper[4430]: I1203 14:08:46.614953 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 14:08:46.615713 master-0 kubenswrapper[4430]: I1203 14:08:46.615681 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 03 14:08:46.615836 master-0 kubenswrapper[4430]: I1203 14:08:46.615789 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Dec 03 14:08:46.615836 master-0 kubenswrapper[4430]: I1203 14:08:46.615816 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" Dec 03 14:08:46.617364 master-0 kubenswrapper[4430]: I1203 14:08:46.617270 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 14:08:46.618735 master-0 kubenswrapper[4430]: I1203 14:08:46.618331 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Dec 03 14:08:46.619180 master-0 kubenswrapper[4430]: I1203 14:08:46.619091 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Dec 03 14:08:46.619180 master-0 kubenswrapper[4430]: I1203 14:08:46.619128 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Dec 03 14:08:46.624488 master-0 kubenswrapper[4430]: I1203 14:08:46.622090 4430 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Dec 03 14:08:46.624761 master-0 kubenswrapper[4430]: I1203 14:08:46.624531 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 14:08:46.626635 master-0 kubenswrapper[4430]: I1203 14:08:46.626605 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Dec 03 14:08:46.628623 master-0 kubenswrapper[4430]: I1203 14:08:46.628593 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 14:08:46.629033 master-0 kubenswrapper[4430]: I1203 14:08:46.629004 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 03 14:08:46.652403 master-0 kubenswrapper[4430]: I1203 14:08:46.652365 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Dec 03 14:08:46.659473 master-0 kubenswrapper[4430]: I1203 14:08:46.659442 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 14:08:46.673046 master-0 kubenswrapper[4430]: I1203 14:08:46.672989 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:46.673046 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:46.673046 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:46.673046 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:46.673333 master-0 kubenswrapper[4430]: I1203 14:08:46.673054 4430 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:46.678609 master-0 kubenswrapper[4430]: I1203 14:08:46.678573 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 14:08:46.698349 master-0 kubenswrapper[4430]: I1203 14:08:46.698311 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 14:08:46.726042 master-0 kubenswrapper[4430]: I1203 14:08:46.725997 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Dec 03 14:08:46.738618 master-0 kubenswrapper[4430]: I1203 14:08:46.738587 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 14:08:46.757922 master-0 kubenswrapper[4430]: I1203 14:08:46.757593 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 03 14:08:46.777640 master-0 kubenswrapper[4430]: I1203 14:08:46.777599 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 14:08:46.810812 master-0 kubenswrapper[4430]: I1203 14:08:46.810771 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 14:08:46.818475 master-0 kubenswrapper[4430]: I1203 14:08:46.818407 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Dec 03 14:08:46.838139 master-0 kubenswrapper[4430]: I1203 14:08:46.838087 4430 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Dec 03 14:08:46.859650 master-0 kubenswrapper[4430]: I1203 14:08:46.859610 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Dec 03 14:08:46.879775 master-0 kubenswrapper[4430]: I1203 14:08:46.879718 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 14:08:46.899169 master-0 kubenswrapper[4430]: I1203 14:08:46.899078 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 14:08:46.918819 master-0 kubenswrapper[4430]: I1203 14:08:46.918768 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Dec 03 14:08:46.938243 master-0 kubenswrapper[4430]: I1203 14:08:46.938012 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 14:08:46.959373 master-0 kubenswrapper[4430]: I1203 14:08:46.959299 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-n8h5v" Dec 03 14:08:46.978139 master-0 kubenswrapper[4430]: I1203 14:08:46.978099 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 14:08:46.998556 master-0 kubenswrapper[4430]: I1203 14:08:46.998514 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 14:08:47.018281 master-0 kubenswrapper[4430]: I1203 14:08:47.018238 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nqkqh" Dec 03 14:08:47.038892 master-0 kubenswrapper[4430]: I1203 14:08:47.038780 4430 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 14:08:47.059901 master-0 kubenswrapper[4430]: I1203 14:08:47.059820 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 14:08:47.079573 master-0 kubenswrapper[4430]: I1203 14:08:47.079501 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 14:08:47.099145 master-0 kubenswrapper[4430]: I1203 14:08:47.099023 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Dec 03 14:08:47.118998 master-0 kubenswrapper[4430]: I1203 14:08:47.118863 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Dec 03 14:08:47.140066 master-0 kubenswrapper[4430]: I1203 14:08:47.140022 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Dec 03 14:08:47.159230 master-0 kubenswrapper[4430]: I1203 14:08:47.159173 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 03 14:08:47.178582 master-0 kubenswrapper[4430]: I1203 14:08:47.178547 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Dec 03 14:08:47.198194 master-0 kubenswrapper[4430]: I1203 14:08:47.198145 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" Dec 03 14:08:47.217994 master-0 kubenswrapper[4430]: I1203 14:08:47.217949 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5" Dec 03 14:08:47.239755 master-0 kubenswrapper[4430]: I1203 14:08:47.239128 4430 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8zh52" Dec 03 14:08:47.259181 master-0 kubenswrapper[4430]: I1203 14:08:47.259133 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 14:08:47.280451 master-0 kubenswrapper[4430]: I1203 14:08:47.280364 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 14:08:47.298597 master-0 kubenswrapper[4430]: I1203 14:08:47.298542 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Dec 03 14:08:47.320296 master-0 kubenswrapper[4430]: I1203 14:08:47.320238 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 14:08:47.351805 master-0 kubenswrapper[4430]: I1203 14:08:47.350841 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 14:08:47.358643 master-0 kubenswrapper[4430]: I1203 14:08:47.358589 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 14:08:47.378510 master-0 kubenswrapper[4430]: I1203 14:08:47.378471 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 14:08:47.398575 master-0 kubenswrapper[4430]: I1203 14:08:47.398523 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 14:08:47.424892 master-0 kubenswrapper[4430]: I1203 14:08:47.424828 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 14:08:47.439809 master-0 
kubenswrapper[4430]: I1203 14:08:47.439744 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 14:08:47.460272 master-0 kubenswrapper[4430]: I1203 14:08:47.460191 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 14:08:47.479113 master-0 kubenswrapper[4430]: I1203 14:08:47.479044 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 14:08:47.498295 master-0 kubenswrapper[4430]: I1203 14:08:47.498236 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Dec 03 14:08:47.519955 master-0 kubenswrapper[4430]: I1203 14:08:47.519918 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 14:08:47.553358 master-0 kubenswrapper[4430]: I1203 14:08:47.553306 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 14:08:47.557651 master-0 kubenswrapper[4430]: I1203 14:08:47.557605 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9" Dec 03 14:08:47.579000 master-0 kubenswrapper[4430]: I1203 14:08:47.578959 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 14:08:47.599054 master-0 kubenswrapper[4430]: I1203 14:08:47.599013 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 14:08:47.617315 master-0 kubenswrapper[4430]: I1203 14:08:47.617162 4430 request.go:700] Waited for 1.016376367s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0 Dec 03 14:08:47.619198 master-0 kubenswrapper[4430]: I1203 14:08:47.619175 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 14:08:47.646966 master-0 kubenswrapper[4430]: I1203 14:08:47.646882 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 14:08:47.658602 master-0 kubenswrapper[4430]: I1203 14:08:47.658554 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 14:08:47.673145 master-0 kubenswrapper[4430]: I1203 14:08:47.673049 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:47.673145 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:47.673145 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:47.673145 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:47.673494 master-0 kubenswrapper[4430]: I1203 14:08:47.673184 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:47.679770 master-0 kubenswrapper[4430]: I1203 14:08:47.679709 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 14:08:47.698913 master-0 kubenswrapper[4430]: I1203 14:08:47.698847 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 
14:08:47.719162 master-0 kubenswrapper[4430]: I1203 14:08:47.719102 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 14:08:47.739623 master-0 kubenswrapper[4430]: I1203 14:08:47.739575 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 14:08:47.759544 master-0 kubenswrapper[4430]: I1203 14:08:47.759468 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 14:08:47.779603 master-0 kubenswrapper[4430]: I1203 14:08:47.779537 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 14:08:47.821748 master-0 kubenswrapper[4430]: I1203 14:08:47.821691 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 14:08:47.822549 master-0 kubenswrapper[4430]: I1203 14:08:47.822488 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 14:08:47.838106 master-0 kubenswrapper[4430]: I1203 14:08:47.838053 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 14:08:47.859412 master-0 kubenswrapper[4430]: I1203 14:08:47.859339 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 14:08:47.879874 master-0 kubenswrapper[4430]: I1203 14:08:47.879705 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 14:08:47.900095 master-0 kubenswrapper[4430]: I1203 14:08:47.900021 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68" Dec 03 14:08:47.919451 master-0 kubenswrapper[4430]: I1203 14:08:47.919340 4430 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 14:08:47.939442 master-0 kubenswrapper[4430]: I1203 14:08:47.939379 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 14:08:47.958155 master-0 kubenswrapper[4430]: I1203 14:08:47.958079 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 14:08:47.979783 master-0 kubenswrapper[4430]: I1203 14:08:47.979704 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 03 14:08:47.999678 master-0 kubenswrapper[4430]: I1203 14:08:47.999600 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 14:08:48.019769 master-0 kubenswrapper[4430]: I1203 14:08:48.019686 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 14:08:48.039290 master-0 kubenswrapper[4430]: I1203 14:08:48.039231 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz" Dec 03 14:08:48.058173 master-0 kubenswrapper[4430]: I1203 14:08:48.058113 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 14:08:48.080368 master-0 kubenswrapper[4430]: I1203 14:08:48.080303 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 14:08:48.099312 master-0 kubenswrapper[4430]: I1203 14:08:48.099230 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-cb9jg" Dec 03 14:08:48.119110 master-0 kubenswrapper[4430]: I1203 
14:08:48.119047 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 14:08:48.138622 master-0 kubenswrapper[4430]: I1203 14:08:48.138518 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Dec 03 14:08:48.159195 master-0 kubenswrapper[4430]: I1203 14:08:48.159140 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 14:08:48.183316 master-0 kubenswrapper[4430]: I1203 14:08:48.178855 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Dec 03 14:08:48.198760 master-0 kubenswrapper[4430]: I1203 14:08:48.198672 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 14:08:48.218445 master-0 kubenswrapper[4430]: I1203 14:08:48.218394 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Dec 03 14:08:48.240003 master-0 kubenswrapper[4430]: I1203 14:08:48.239958 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 14:08:48.258955 master-0 kubenswrapper[4430]: I1203 14:08:48.258888 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8" Dec 03 14:08:48.284935 master-0 kubenswrapper[4430]: I1203 14:08:48.284860 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 14:08:48.298161 master-0 kubenswrapper[4430]: I1203 14:08:48.298109 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-dockercfg-bdlwz" Dec 03 14:08:48.318377 master-0 kubenswrapper[4430]: I1203 14:08:48.318330 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 14:08:48.338993 master-0 kubenswrapper[4430]: I1203 14:08:48.338917 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 03 14:08:48.358154 master-0 kubenswrapper[4430]: I1203 14:08:48.358078 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 03 14:08:48.386138 master-0 kubenswrapper[4430]: I1203 14:08:48.386081 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 14:08:48.398402 master-0 kubenswrapper[4430]: I1203 14:08:48.398328 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 14:08:48.417768 master-0 kubenswrapper[4430]: I1203 14:08:48.417720 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 14:08:48.437790 master-0 kubenswrapper[4430]: I1203 14:08:48.437761 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 14:08:48.457998 master-0 kubenswrapper[4430]: I1203 14:08:48.457947 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 14:08:48.478197 master-0 kubenswrapper[4430]: I1203 14:08:48.478158 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 14:08:48.498983 master-0 kubenswrapper[4430]: I1203 14:08:48.498925 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl" Dec 03 14:08:48.519897 master-0 kubenswrapper[4430]: I1203 14:08:48.519828 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 14:08:48.673325 master-0 kubenswrapper[4430]: I1203 14:08:48.673117 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:48.673325 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:48.673325 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:48.673325 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:48.673325 master-0 kubenswrapper[4430]: I1203 14:08:48.673177 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:49.673808 master-0 kubenswrapper[4430]: I1203 14:08:49.673741 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:49.673808 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:49.673808 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:49.673808 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:49.674503 master-0 kubenswrapper[4430]: I1203 14:08:49.673841 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:50.673696 master-0 kubenswrapper[4430]: I1203 14:08:50.673633 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:50.673696 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:50.673696 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:50.673696 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:50.674231 master-0 kubenswrapper[4430]: I1203 14:08:50.673727 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:50.844651 master-0 kubenswrapper[4430]: I1203 14:08:50.844584 4430 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Dec 03 14:08:51.674128 master-0 kubenswrapper[4430]: I1203 14:08:51.674065 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:51.674128 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:51.674128 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:51.674128 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:51.674682 master-0 kubenswrapper[4430]: I1203 14:08:51.674144 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Dec 03 14:08:52.500979 master-0 kubenswrapper[4430]: I1203 14:08:52.500922 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:52.501053 master-0 kubenswrapper[4430]: I1203 14:08:52.501026 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.501154 master-0 kubenswrapper[4430]: I1203 14:08:52.501118 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.501452 master-0 kubenswrapper[4430]: I1203 14:08:52.501210 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:52.501452 master-0 kubenswrapper[4430]: I1203 14:08:52.501256 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.501452 master-0 kubenswrapper[4430]: I1203 14:08:52.501299 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.501452 master-0 kubenswrapper[4430]: I1203 14:08:52.501364 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.501452 master-0 kubenswrapper[4430]: I1203 14:08:52.501410 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.501650 master-0 kubenswrapper[4430]: I1203 14:08:52.501470 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.501650 master-0 kubenswrapper[4430]: I1203 14:08:52.501537 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:52.501650 master-0 kubenswrapper[4430]: I1203 14:08:52.501576 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.501650 master-0 kubenswrapper[4430]: I1203 14:08:52.501613 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.501808 master-0 kubenswrapper[4430]: I1203 14:08:52.501653 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:52.501808 master-0 kubenswrapper[4430]: I1203 14:08:52.501694 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: 
\"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.501808 master-0 kubenswrapper[4430]: I1203 14:08:52.501738 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.501808 master-0 kubenswrapper[4430]: I1203 14:08:52.501782 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501822 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501861 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501899 4430 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501935 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501996 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.501962 master-0 kubenswrapper[4430]: I1203 14:08:52.501996 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502035 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:52.502286 
master-0 kubenswrapper[4430]: I1203 14:08:52.502097 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502137 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502177 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502212 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502231 4430 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 14:08:52.502286 master-0 kubenswrapper[4430]: I1203 14:08:52.502251 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502292 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502331 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502369 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502408 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502475 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:52.502599 master-0 kubenswrapper[4430]: I1203 14:08:52.502534 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502631 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502641 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 
14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502672 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502696 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502731 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502771 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502811 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: 
\"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:52.502902 master-0 kubenswrapper[4430]: I1203 14:08:52.502849 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.502911 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.502977 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503013 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503050 4430 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503112 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503150 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503187 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503227 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:52.503275 master-0 kubenswrapper[4430]: I1203 14:08:52.503265 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503338 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503350 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503380 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503442 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") 
pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503484 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503523 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503563 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503568 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 
14:08:52.503685 master-0 kubenswrapper[4430]: I1203 14:08:52.503603 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503787 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503823 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503829 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " 
pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503852 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503884 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503955 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.503990 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.504022 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:52.504063 master-0 kubenswrapper[4430]: I1203 14:08:52.504068 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504100 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504134 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: 
\"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504165 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504207 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504239 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504266 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504308 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" 
(UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504339 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504381 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.502252 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504448 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 
14:08:52.504494 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:52.504534 master-0 kubenswrapper[4430]: I1203 14:08:52.504527 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504555 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504582 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504614 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504643 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504669 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504697 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504727 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504761 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504791 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504842 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504874 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504907 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 
14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504934 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504964 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.504990 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.505019 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.505063 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod 
\"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.505088 master-0 kubenswrapper[4430]: I1203 14:08:52.505091 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505116 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505144 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505174 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505190 4430 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505200 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505291 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505473 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505534 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505619 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505699 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505762 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505804 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505841 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505849 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505881 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.505967 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506007 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506097 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506233 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506272 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506310 4430 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506373 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:08:52.506410 master-0 kubenswrapper[4430]: I1203 14:08:52.506412 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506514 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506574 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506613 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506665 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506751 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506791 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506831 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506871 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506907 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506945 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.506981 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 
14:08:52.507019 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507057 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507095 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507121 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507137 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507172 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:52.507274 master-0 kubenswrapper[4430]: I1203 14:08:52.507230 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507289 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507341 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507380 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507453 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507521 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507566 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507606 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507664 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507704 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507744 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.507830 master-0 kubenswrapper[4430]: I1203 14:08:52.507786 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.507863 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.507921 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.507972 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.508036 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.508052 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.508114 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.508131 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.508238 master-0 kubenswrapper[4430]: I1203 14:08:52.508185 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.508470 master-0 kubenswrapper[4430]: I1203 14:08:52.508240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.508470 
master-0 kubenswrapper[4430]: I1203 14:08:52.508276 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.508470 master-0 kubenswrapper[4430]: I1203 14:08:52.508295 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:52.508470 master-0 kubenswrapper[4430]: I1203 14:08:52.508345 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.508470 master-0 kubenswrapper[4430]: I1203 14:08:52.508360 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508466 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508514 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508542 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508566 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508590 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:52.508624 master-0 kubenswrapper[4430]: I1203 14:08:52.508611 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:52.508801 master-0 kubenswrapper[4430]: I1203 14:08:52.508638 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:52.508801 master-0 kubenswrapper[4430]: I1203 14:08:52.508672 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:52.508801 master-0 kubenswrapper[4430]: I1203 14:08:52.508694 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.509058 master-0 kubenswrapper[4430]: I1203 14:08:52.508998 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.509122 master-0 kubenswrapper[4430]: I1203 14:08:52.509082 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.509320 master-0 kubenswrapper[4430]: I1203 14:08:52.509290 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:52.509433 master-0 kubenswrapper[4430]: I1203 14:08:52.509391 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:52.509914 master-0 kubenswrapper[4430]: I1203 14:08:52.507460 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:52.510045 master-0 kubenswrapper[4430]: I1203 14:08:52.509993 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:52.510977 master-0 kubenswrapper[4430]: I1203 14:08:52.510527 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:52.510977 master-0 kubenswrapper[4430]: I1203 14:08:52.505852 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:52.510977 master-0 kubenswrapper[4430]: I1203 14:08:52.510899 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:52.511542 master-0 kubenswrapper[4430]: I1203 14:08:52.511158 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:52.511821 master-0 kubenswrapper[4430]: I1203 14:08:52.511796 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.512303 master-0 kubenswrapper[4430]: I1203 14:08:52.512272 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.512508 master-0 kubenswrapper[4430]: I1203 14:08:52.512473 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:52.512571 master-0 kubenswrapper[4430]: I1203 14:08:52.512486 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:52.513831 master-0 kubenswrapper[4430]: I1203 14:08:52.513793 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.513942 master-0 kubenswrapper[4430]: I1203 14:08:52.513908 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:52.513996 master-0 kubenswrapper[4430]: I1203 14:08:52.513973 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.514028 master-0 kubenswrapper[4430]: I1203 14:08:52.513987 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:52.514315 master-0 kubenswrapper[4430]: I1203 14:08:52.514282 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:52.514536 master-0 kubenswrapper[4430]: I1203 14:08:52.514506 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:52.514597 master-0 kubenswrapper[4430]: I1203 14:08:52.514515 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:52.514774 master-0 kubenswrapper[4430]: I1203 14:08:52.514744 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:52.514963 master-0 kubenswrapper[4430]: I1203 14:08:52.514937 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:52.515008 master-0 kubenswrapper[4430]: I1203 14:08:52.514986 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:52.515519 master-0 kubenswrapper[4430]: I1203 14:08:52.515395 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:52.516309 master-0 kubenswrapper[4430]: I1203 14:08:52.516272 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.516769 master-0 kubenswrapper[4430]: I1203 14:08:52.516745 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:52.517159 master-0 kubenswrapper[4430]: I1203 14:08:52.517129 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:52.517765 master-0 kubenswrapper[4430]: I1203 14:08:52.517733 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:52.517901 master-0 kubenswrapper[4430]: I1203 14:08:52.517881 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:52.518302 master-0 kubenswrapper[4430]: I1203 14:08:52.518276 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.518681 master-0 kubenswrapper[4430]: I1203 14:08:52.518502 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:52.518850 master-0 kubenswrapper[4430]: I1203 14:08:52.518797 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:52.519042 master-0 kubenswrapper[4430]: I1203 14:08:52.519017 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:52.519146 master-0 kubenswrapper[4430]: I1203 14:08:52.519103 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:52.519308 master-0 kubenswrapper[4430]: I1203 14:08:52.519284 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:08:52.519850 master-0 kubenswrapper[4430]: I1203 14:08:52.519800 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:52.519850 master-0 kubenswrapper[4430]: I1203 14:08:52.519843 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.519950 master-0 kubenswrapper[4430]: I1203 14:08:52.519824 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.520627 master-0 kubenswrapper[4430]: I1203 14:08:52.520603 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:52.520690 master-0 kubenswrapper[4430]: I1203 14:08:52.520649 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:52.520964 master-0 kubenswrapper[4430]: I1203 14:08:52.520934 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.521251 master-0 kubenswrapper[4430]: I1203 14:08:52.521219 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:52.521372 master-0 kubenswrapper[4430]: I1203 14:08:52.521315 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.521715 master-0 kubenswrapper[4430]: I1203 14:08:52.521563 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:52.521784 master-0 kubenswrapper[4430]: I1203 14:08:52.521754 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:52.522073 master-0 kubenswrapper[4430]: I1203 14:08:52.522025 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.522299 master-0 kubenswrapper[4430]: I1203 14:08:52.522252 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"multus-admission-controller-5bdcc987c4-x99xc\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"
Dec 03 14:08:52.522711 master-0 kubenswrapper[4430]: I1203 14:08:52.522674 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:52.522777 master-0 kubenswrapper[4430]: I1203 14:08:52.522756 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:52.522815 master-0 kubenswrapper[4430]: I1203 14:08:52.522786 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:52.522890 master-0 kubenswrapper[4430]: I1203 14:08:52.522864 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:52.522976 master-0 kubenswrapper[4430]: I1203 14:08:52.522949 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:08:52.523376 master-0 kubenswrapper[4430]: I1203 14:08:52.523247 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.523376 master-0 kubenswrapper[4430]: I1203 14:08:52.523031 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:52.523552 master-0 kubenswrapper[4430]: I1203 14:08:52.523528 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:52.524859 master-0 kubenswrapper[4430]: I1203 14:08:52.524804 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:08:52.527476 master-0 kubenswrapper[4430]: I1203 14:08:52.526517 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:52.527476 master-0 kubenswrapper[4430]: I1203 14:08:52.526987 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:08:52.527476 master-0 kubenswrapper[4430]: I1203 14:08:52.527159 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:52.527801 master-0 kubenswrapper[4430]: I1203 14:08:52.527658 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:52.528040 master-0 kubenswrapper[4430]: I1203 14:08:52.527966 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:52.528096 master-0 kubenswrapper[4430]: I1203 14:08:52.528048 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:52.528700 master-0 kubenswrapper[4430]: I1203 14:08:52.528666 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:08:52.529177 master-0 kubenswrapper[4430]: I1203 14:08:52.529094 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:52.529264 master-0 kubenswrapper[4430]: I1203 14:08:52.529189 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:52.529959 master-0 kubenswrapper[4430]: I1203 14:08:52.529934 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:52.530060 master-0 kubenswrapper[4430]: I1203 14:08:52.529098 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530245 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530296 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530452 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530543 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530595 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.530788 master-0 kubenswrapper[4430]: I1203 14:08:52.530579 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.531033 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.531148 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.531182 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.531284 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.532062 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.532699 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:52.532730 master-0 kubenswrapper[4430]: I1203 14:08:52.532732 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:52.533703 master-0 kubenswrapper[4430]: I1203 14:08:52.533166 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:52.533703 master-0 kubenswrapper[4430]: I1203 14:08:52.533175 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:08:52.533703 master-0 kubenswrapper[4430]: I1203 14:08:52.533400 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:08:52.534256 master-0 kubenswrapper[4430]: I1203 14:08:52.533753 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:08:52.534256 master-0 kubenswrapper[4430]: I1203 14:08:52.533791 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName:
\"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.534256 master-0 kubenswrapper[4430]: I1203 14:08:52.533932 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534446 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534453 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534555 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534608 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534716 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:52.534838 master-0 kubenswrapper[4430]: I1203 14:08:52.534715 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.535138 master-0 kubenswrapper[4430]: I1203 14:08:52.535099 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.535603 master-0 kubenswrapper[4430]: I1203 14:08:52.535392 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.535603 master-0 kubenswrapper[4430]: I1203 14:08:52.535562 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:08:52.535966 master-0 kubenswrapper[4430]: I1203 14:08:52.535889 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.536347 master-0 kubenswrapper[4430]: I1203 14:08:52.536300 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:52.537480 master-0 kubenswrapper[4430]: I1203 14:08:52.537448 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 
14:08:52.537718 master-0 kubenswrapper[4430]: I1203 14:08:52.537678 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.537972 master-0 kubenswrapper[4430]: I1203 14:08:52.537919 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.537972 master-0 kubenswrapper[4430]: I1203 14:08:52.537943 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:52.538281 master-0 kubenswrapper[4430]: I1203 14:08:52.538256 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.538336 master-0 kubenswrapper[4430]: I1203 14:08:52.538309 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod 
\"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:08:52.538402 master-0 kubenswrapper[4430]: I1203 14:08:52.538313 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:08:52.538402 master-0 kubenswrapper[4430]: I1203 14:08:52.538375 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538637 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538642 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538719 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538797 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538952 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.538979 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539242 4430 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539374 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539382 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539491 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539554 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: 
\"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539649 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539658 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539681 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539787 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539826 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539793 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539869 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.539983 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.540116 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.540132 
4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:52.540195 master-0 kubenswrapper[4430]: I1203 14:08:52.540136 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.540398 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.540657 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.540796 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.541399 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.542065 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.542100 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.542382 master-0 kubenswrapper[4430]: I1203 14:08:52.542291 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:52.542920 master-0 kubenswrapper[4430]: I1203 14:08:52.542482 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:52.542920 master-0 kubenswrapper[4430]: I1203 14:08:52.542513 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:08:52.542920 master-0 kubenswrapper[4430]: I1203 14:08:52.542905 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"alertmanager-main-0\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.543180 master-0 kubenswrapper[4430]: I1203 14:08:52.543144 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:52.543263 master-0 kubenswrapper[4430]: I1203 14:08:52.543204 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"prometheus-k8s-0\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.543263 master-0 kubenswrapper[4430]: I1203 14:08:52.543230 4430 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:52.543330 master-0 kubenswrapper[4430]: I1203 14:08:52.543288 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:52.543474 master-0 kubenswrapper[4430]: I1203 14:08:52.543447 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:08:52.543644 master-0 kubenswrapper[4430]: I1203 14:08:52.543614 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:52.543735 master-0 kubenswrapper[4430]: I1203 14:08:52.543704 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: 
\"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:52.544002 master-0 kubenswrapper[4430]: I1203 14:08:52.543971 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:52.550903 master-0 kubenswrapper[4430]: I1203 14:08:52.550873 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:08:52.568977 master-0 kubenswrapper[4430]: I1203 14:08:52.568933 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:08:52.602357 master-0 kubenswrapper[4430]: I1203 14:08:52.602284 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:52.606316 master-0 kubenswrapper[4430]: I1203 14:08:52.606205 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:08:52.632836 master-0 kubenswrapper[4430]: I1203 14:08:52.628870 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:08:52.652173 master-0 kubenswrapper[4430]: I1203 14:08:52.651464 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:08:52.659744 master-0 kubenswrapper[4430]: I1203 14:08:52.657858 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:08:52.674708 master-0 kubenswrapper[4430]: I1203 14:08:52.674460 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:52.674708 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:52.674708 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:52.674708 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:52.674708 master-0 kubenswrapper[4430]: I1203 14:08:52.674548 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:52.694475 master-0 kubenswrapper[4430]: I1203 14:08:52.693914 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:08:52.727341 master-0 kubenswrapper[4430]: I1203 14:08:52.712925 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:08:52.750098 master-0 kubenswrapper[4430]: I1203 14:08:52.745013 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:08:52.750098 master-0 kubenswrapper[4430]: I1203 14:08:52.748174 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:08:52.757634 master-0 kubenswrapper[4430]: I1203 14:08:52.754788 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:08:52.778986 master-0 kubenswrapper[4430]: I1203 14:08:52.778888 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:52.783608 master-0 kubenswrapper[4430]: I1203 14:08:52.783520 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:08:52.796972 master-0 kubenswrapper[4430]: I1203 14:08:52.796878 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:08:52.797462 master-0 kubenswrapper[4430]: I1203 14:08:52.797437 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:08:52.811983 master-0 kubenswrapper[4430]: I1203 14:08:52.811914 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:08:52.830648 master-0 kubenswrapper[4430]: I1203 14:08:52.830568 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:08:53.228698 master-0 kubenswrapper[4430]: I1203 14:08:53.228645 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228708 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228759 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228787 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228813 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228835 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228862 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228886 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228907 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:53.228938 master-0 kubenswrapper[4430]: I1203 14:08:53.228945 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.228968 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.229007 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.229195 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.229232 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.229258 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:53.229345 master-0 kubenswrapper[4430]: I1203 14:08:53.229283 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod 
\"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.229469 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.229557 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.230226 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.230266 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 
14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.230294 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.230317 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:08:53.230347 master-0 kubenswrapper[4430]: I1203 14:08:53.230338 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230383 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230408 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: 
\"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230451 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230481 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230512 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:53.230666 master-0 kubenswrapper[4430]: I1203 14:08:53.230536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod 
\"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:53.235166 master-0 kubenswrapper[4430]: I1203 14:08:53.235133 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:08:53.235247 master-0 kubenswrapper[4430]: I1203 14:08:53.235200 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:08:53.235758 master-0 kubenswrapper[4430]: I1203 14:08:53.235723 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:08:53.236482 master-0 kubenswrapper[4430]: I1203 14:08:53.236403 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 
14:08:53.237013 master-0 kubenswrapper[4430]: I1203 14:08:53.236975 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:53.237600 master-0 kubenswrapper[4430]: I1203 14:08:53.237566 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:08:53.237786 master-0 kubenswrapper[4430]: I1203 14:08:53.237744 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:53.238979 master-0 kubenswrapper[4430]: I1203 14:08:53.238845 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:08:53.239306 master-0 kubenswrapper[4430]: I1203 14:08:53.239279 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 
03 14:08:53.240517 master-0 kubenswrapper[4430]: I1203 14:08:53.240002 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:08:53.240731 master-0 kubenswrapper[4430]: I1203 14:08:53.240590 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:08:53.240731 master-0 kubenswrapper[4430]: I1203 14:08:53.240670 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:08:53.245306 master-0 kubenswrapper[4430]: I1203 14:08:53.245258 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:08:53.245904 master-0 kubenswrapper[4430]: I1203 14:08:53.245857 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc87\" (UniqueName: 
\"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:08:53.247311 master-0 kubenswrapper[4430]: I1203 14:08:53.247261 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:08:53.247390 master-0 kubenswrapper[4430]: I1203 14:08:53.247306 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:08:53.247390 master-0 kubenswrapper[4430]: I1203 14:08:53.247334 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:08:53.247877 master-0 kubenswrapper[4430]: I1203 14:08:53.247832 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:08:53.248025 master-0 kubenswrapper[4430]: 
I1203 14:08:53.247948 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:08:53.249994 master-0 kubenswrapper[4430]: I1203 14:08:53.249963 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:08:53.251987 master-0 kubenswrapper[4430]: I1203 14:08:53.251952 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:08:53.252196 master-0 kubenswrapper[4430]: I1203 14:08:53.252153 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:08:53.252517 master-0 kubenswrapper[4430]: I1203 14:08:53.252478 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqkdr\" (UniqueName: 
\"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:08:53.252602 master-0 kubenswrapper[4430]: I1203 14:08:53.252517 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:08:53.253711 master-0 kubenswrapper[4430]: I1203 14:08:53.253659 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:08:53.253933 master-0 kubenswrapper[4430]: I1203 14:08:53.253904 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:08:53.254158 master-0 kubenswrapper[4430]: I1203 14:08:53.254128 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:08:53.254247 master-0 kubenswrapper[4430]: I1203 14:08:53.254220 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:08:53.254866 master-0 kubenswrapper[4430]: I1203 14:08:53.254830 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:08:53.257390 master-0 kubenswrapper[4430]: I1203 14:08:53.257193 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:08:53.258216 master-0 kubenswrapper[4430]: I1203 14:08:53.258186 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:53.258907 master-0 kubenswrapper[4430]: I1203 14:08:53.258857 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:53.260194 master-0 kubenswrapper[4430]: I1203 14:08:53.260144 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:53.260901 master-0 kubenswrapper[4430]: I1203 14:08:53.260589 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:08:53.275706 master-0 kubenswrapper[4430]: I1203 14:08:53.275632 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:08:53.276611 master-0 kubenswrapper[4430]: I1203 14:08:53.276523 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:08:53.290914 master-0 kubenswrapper[4430]: I1203 14:08:53.290851 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:08:53.293799 master-0 kubenswrapper[4430]: I1203 14:08:53.293745 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:08:53.303660 master-0 kubenswrapper[4430]: I1203 14:08:53.303564 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:08:53.303871 master-0 kubenswrapper[4430]: I1203 14:08:53.303792 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:08:53.313180 master-0 kubenswrapper[4430]: I1203 14:08:53.313120 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:08:53.321761 master-0 kubenswrapper[4430]: I1203 14:08:53.321714 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:08:53.325184 master-0 kubenswrapper[4430]: I1203 14:08:53.325142 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:08:53.336219 master-0 kubenswrapper[4430]: I1203 14:08:53.335948 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:53.340673 master-0 kubenswrapper[4430]: I1203 14:08:53.340139 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:08:53.340673 master-0 kubenswrapper[4430]: I1203 14:08:53.340284 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:08:53.360148 master-0 kubenswrapper[4430]: I1203 14:08:53.359324 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:08:53.360148 master-0 kubenswrapper[4430]: I1203 14:08:53.359820 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:53.369661 master-0 kubenswrapper[4430]: I1203 14:08:53.369632 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:08:53.372117 master-0 kubenswrapper[4430]: I1203 14:08:53.371331 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:08:53.376101 master-0 kubenswrapper[4430]: W1203 14:08:53.376046 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb02244d0_f4ef_4702_950d_9e3fb5ced128.slice/crio-4c0305314a4b45e5414c789dafb334fadb544a48e9c0ed7698b1e52caf421940 WatchSource:0}: Error finding container 4c0305314a4b45e5414c789dafb334fadb544a48e9c0ed7698b1e52caf421940: Status 404 returned error can't find the container with id 4c0305314a4b45e5414c789dafb334fadb544a48e9c0ed7698b1e52caf421940
Dec 03 14:08:53.386462 master-0 kubenswrapper[4430]: W1203 14:08:53.386407 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22673f47_9484_4eed_bbce_888588c754ed.slice/crio-167491accbcc7761df1a93bd8d4d7ce925c43643ae8d917bb763188fd267db1b WatchSource:0}: Error finding container 167491accbcc7761df1a93bd8d4d7ce925c43643ae8d917bb763188fd267db1b: Status 404 returned error can't find the container with id 167491accbcc7761df1a93bd8d4d7ce925c43643ae8d917bb763188fd267db1b
Dec 03 14:08:53.402473 master-0 kubenswrapper[4430]: I1203 14:08:53.393604 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:08:53.446111 master-0 kubenswrapper[4430]: I1203 14:08:53.440091 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:08:53.446111 master-0 kubenswrapper[4430]: I1203 14:08:53.442341 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:53.446111 master-0 kubenswrapper[4430]: I1203 14:08:53.442745 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:08:53.446111 master-0 kubenswrapper[4430]: I1203 14:08:53.443012 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:08:53.459696 master-0 kubenswrapper[4430]: I1203 14:08:53.459655 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:08:53.472029 master-0 kubenswrapper[4430]: I1203 14:08:53.471597 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:08:53.482665 master-0 kubenswrapper[4430]: I1203 14:08:53.478789 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:08:53.489786 master-0 kubenswrapper[4430]: I1203 14:08:53.489750 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:08:53.513941 master-0 kubenswrapper[4430]: I1203 14:08:53.513899 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:08:53.519159 master-0 kubenswrapper[4430]: I1203 14:08:53.519103 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:08:53.535719 master-0 kubenswrapper[4430]: I1203 14:08:53.535672 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:08:53.747063 master-0 kubenswrapper[4430]: I1203 14:08:53.746960 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:53.747063 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:53.747063 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:53.747063 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:53.748900 master-0 kubenswrapper[4430]: I1203 14:08:53.747082 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:53.809354 master-0 kubenswrapper[4430]: W1203 14:08:53.809304 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3c1ebb9_f052_410b_a999_45e9b75b0e58.slice/crio-a3044ef343d5d6aaa40cc876cfa7c96186196d77b07378c6017e11610dde30c4 WatchSource:0}: Error finding container a3044ef343d5d6aaa40cc876cfa7c96186196d77b07378c6017e11610dde30c4: Status 404 returned error can't find the container with id a3044ef343d5d6aaa40cc876cfa7c96186196d77b07378c6017e11610dde30c4
Dec 03 14:08:53.816220 master-0 kubenswrapper[4430]: W1203 14:08:53.816110 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4df2889c_99f7_402a_9d50_18ccf427179c.slice/crio-4b1cb97e1db3d78a538e0bea720663b9fdcfe94f64a9a8d92b98cf58caa70051 WatchSource:0}: Error finding container 4b1cb97e1db3d78a538e0bea720663b9fdcfe94f64a9a8d92b98cf58caa70051: Status 404 returned error can't find the container with id 4b1cb97e1db3d78a538e0bea720663b9fdcfe94f64a9a8d92b98cf58caa70051
Dec 03 14:08:54.041092 master-0 kubenswrapper[4430]: I1203 14:08:54.041042 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"0ad92ee62b63b58ee757b6a1515faa17afbf7c89d11d0cd93a8fd0ba103cd2fd"}
Dec 03 14:08:54.043020 master-0 kubenswrapper[4430]: I1203 14:08:54.042624 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"4c0305314a4b45e5414c789dafb334fadb544a48e9c0ed7698b1e52caf421940"}
Dec 03 14:08:54.044473 master-0 kubenswrapper[4430]: I1203 14:08:54.044430 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"a3044ef343d5d6aaa40cc876cfa7c96186196d77b07378c6017e11610dde30c4"}
Dec 03 14:08:54.045813 master-0 kubenswrapper[4430]: I1203 14:08:54.045785 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"167491accbcc7761df1a93bd8d4d7ce925c43643ae8d917bb763188fd267db1b"}
Dec 03 14:08:54.048902 master-0 kubenswrapper[4430]: I1203 14:08:54.048871 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"19c3142e042cfc5fe17472605f8a4f809cf671dbe4d74f555546128f6ed0f46c"}
Dec 03 14:08:54.050876 master-0 kubenswrapper[4430]: I1203 14:08:54.050820 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"4b1cb97e1db3d78a538e0bea720663b9fdcfe94f64a9a8d92b98cf58caa70051"}
Dec 03 14:08:54.054231 master-0 kubenswrapper[4430]: I1203 14:08:54.054171 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"08ced1c3618fdc83fc2e72fb6738f0de722b94b0761b7be5638a318aed1b0c8a"}
Dec 03 14:08:54.054359 master-0 kubenswrapper[4430]: I1203 14:08:54.054250 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"fde44d2f5f0d3ffa96138149beb87e962fc098b20de335ba6bc259e4a6260c13"}
Dec 03 14:08:54.054359 master-0 kubenswrapper[4430]: I1203 14:08:54.054271 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"ce56d1eeb4e0c98d7b6f59d7815f0c9d9d64ccd041aeaa4ba7992737600f42fa"}
Dec 03 14:08:54.056242 master-0 kubenswrapper[4430]: I1203 14:08:54.056196 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"b86c403eae06b2512d9ab63dd5d7ee4c866d04452f503f67206ce9e9cd5551d3"}
Dec 03 14:08:54.059352 master-0 kubenswrapper[4430]: I1203 14:08:54.059315 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"b3c8f5d4f39beb3b9fb9f00d817f68bb3f8dc76e7a783e62c497f6ac9be62874"}
Dec 03 14:08:54.059453 master-0 kubenswrapper[4430]: I1203 14:08:54.059357 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"1e55c30a36d3e0a0bc5db2e09355a339d57f3e1071b63c609f21b52aa62c7d73"}
Dec 03 14:08:54.787153 master-0 kubenswrapper[4430]: I1203 14:08:54.785743 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:54.787153 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:54.787153 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:54.787153 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:54.787153 master-0 kubenswrapper[4430]: I1203 14:08:54.785799 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891039 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891135 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891165 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891189 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891214 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891240 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891268 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891297 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891346 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: E1203 14:08:54.891347 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: E1203 14:08:54.891402 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: E1203 14:08:54.891496 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:09:26.891466618 +0000 UTC m=+67.514380684 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891370 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891592 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891656 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891696 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891741 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891776 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891798 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891823 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:54.892671 master-0 kubenswrapper[4430]: I1203 14:08:54.891846 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.895570 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.895843 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.895724 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.899179 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.899226 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.899291 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"controller-manager-78d987764b-xcs5w\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:54.899440 master-0 kubenswrapper[4430]: I1203 14:08:54.899354 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:54.899805 master-0 kubenswrapper[4430]: I1203 14:08:54.899742 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"console-648d88c756-vswh8\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:54.900748 master-0 kubenswrapper[4430]: I1203 14:08:54.900005 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:54.900748 master-0 kubenswrapper[4430]: I1203 14:08:54.900300 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"route-controller-manager-678c7f799b-4b7nv\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") " pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:54.900748 master-0 kubenswrapper[4430]: I1203 14:08:54.900410 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:54.900748 master-0 kubenswrapper[4430]: I1203 14:08:54.900732 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:54.901113 master-0 kubenswrapper[4430]: I1203 14:08:54.901068 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:54.901332 master-0 kubenswrapper[4430]: I1203 14:08:54.901286 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:54.901624 master-0 kubenswrapper[4430]: I1203 14:08:54.901536 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:54.902742 master-0 kubenswrapper[4430]: I1203 14:08:54.902685 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:54.903866 master-0 kubenswrapper[4430]: I1203 14:08:54.903790 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"console-c5d7cd7f9-2hp75\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:54.910178 master-0 kubenswrapper[4430]: I1203 14:08:54.909609 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:54.942826 master-0 kubenswrapper[4430]: I1203 14:08:54.942763 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:08:54.996459 master-0 kubenswrapper[4430]: I1203 14:08:54.993678 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:08:54.996459 master-0 kubenswrapper[4430]: I1203 14:08:54.994247 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:08:55.030456 master-0 kubenswrapper[4430]: I1203 14:08:55.023255 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:08:55.038928 master-0 kubenswrapper[4430]: I1203 14:08:55.035106 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.099650 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.101180 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.104964 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.117137 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.118460 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.117134 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.118980 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.126963 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:08:55.134738 master-0 kubenswrapper[4430]: I1203 14:08:55.128699 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:08:55.145727 master-0 kubenswrapper[4430]: I1203 14:08:55.145603 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"6bc8cd637d6abcf3106be597377869fe944925e079fd871be16176853ffc3c2f"}
Dec 03 14:08:55.146701 master-0 kubenswrapper[4430]: I1203 14:08:55.146677 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"1e4691f184c057319c4a4f04cbad73cc8b1fa6e14c298925e074539de10838cf"}
Dec 03 14:08:55.159482 master-0 kubenswrapper[4430]: I1203 14:08:55.159448 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:08:55.167182 master-0 kubenswrapper[4430]: I1203 14:08:55.167072 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"6763556ae5d3c40350287a7a36e5f8ecd1ff5403f56b53c354aec326ec8d1a4c"}
Dec 03 14:08:55.167182 master-0 kubenswrapper[4430]: I1203 14:08:55.167149 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"bae107df9b3a9e762d5264f84db57403126e7f9a8cc1809fa06f7e1b7657108c"}
Dec 03 14:08:55.173784 master-0 kubenswrapper[4430]: I1203 14:08:55.173710 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"54fcc5e0aa1673e2b6dad85f94502aca754cb295fd40bedc5379c8fda8df96bb"}
Dec 03 14:08:55.176839 master-0 kubenswrapper[4430]: I1203 14:08:55.176776 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"61126e75118cd703f736f8e6a53eb486ebd409a780c539f465c42e6ca41a0ec7"}
Dec 03 14:08:55.181704 master-0 kubenswrapper[4430]: I1203 14:08:55.180350 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"8a576ad2831821ea4b6d5602aa70010b115a4dc548df83ef9a7f154f24e78877"}
Dec 03 14:08:55.181704 master-0 kubenswrapper[4430]: I1203 14:08:55.180915 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:08:55.181704 master-0 kubenswrapper[4430]: I1203 14:08:55.181020 4430 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:08:55.189984 master-0 kubenswrapper[4430]: I1203 14:08:55.188200 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"17a5cd44c392850f434f61e5e79c39571ac458da606d60e15fa99372bc690af8"} Dec 03 14:08:55.189984 master-0 kubenswrapper[4430]: I1203 14:08:55.188266 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"3daf9805df4755245acb16c6bd08862ac8da135d42b0fb2123ea52e5c7601eea"} Dec 03 14:08:55.200725 master-0 kubenswrapper[4430]: I1203 14:08:55.191550 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"d079cea270d0cfe6e45724c631ac62ac89c7a513e0ce5e9badad54e53fa31429"} Dec 03 14:08:55.200725 master-0 kubenswrapper[4430]: I1203 14:08:55.192645 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:55.200725 master-0 kubenswrapper[4430]: I1203 14:08:55.195261 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"cf68930d8c87e8957e8fdbba5d623639f91d1b1a3d9d121398a783e96e5e3961"} Dec 03 14:08:55.200725 master-0 kubenswrapper[4430]: I1203 14:08:55.195300 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" 
event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerStarted","Data":"d91ef0ad78b6221abcedb3e08cbd0af37a4ae5c5da50c245215f454652d8185e"} Dec 03 14:08:55.213443 master-0 kubenswrapper[4430]: I1203 14:08:55.208328 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"5576bcab25f532d5c23561da9740c9c060d5652c2977da1a48a406483a3a2e71"} Dec 03 14:08:55.222260 master-0 kubenswrapper[4430]: I1203 14:08:55.221620 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"b47333ee9ba830a9af683a9b0c9324b71e1b375dad853a63c54e3c6a8cb148a6"} Dec 03 14:08:55.222260 master-0 kubenswrapper[4430]: I1203 14:08:55.221677 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"72559983816bdf6dfe237948f445318c22392716c2ae4897c16037196621efe1"} Dec 03 14:08:55.222260 master-0 kubenswrapper[4430]: I1203 14:08:55.221698 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"52313725c2028dfbc2904f160605184b179c485f5bc888ed26c9c71dd42dc37c"} Dec 03 14:08:55.235146 master-0 kubenswrapper[4430]: I1203 14:08:55.226791 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:08:55.241758 master-0 kubenswrapper[4430]: I1203 14:08:55.236509 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" 
containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10" exitCode=0 Dec 03 14:08:55.241758 master-0 kubenswrapper[4430]: I1203 14:08:55.236585 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} Dec 03 14:08:55.825456 master-0 kubenswrapper[4430]: I1203 14:08:55.825232 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:55.825456 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:55.825456 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:55.825456 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:55.825456 master-0 kubenswrapper[4430]: I1203 14:08:55.825313 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:56.286775 master-0 kubenswrapper[4430]: I1203 14:08:56.286189 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"e2226b1e6fbbce79a23d04c546c8f3a797f1ed00bbf04ce53e482ad645f13380"} Dec 03 14:08:56.317747 master-0 kubenswrapper[4430]: I1203 14:08:56.317259 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" 
event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"b0931e25643c6197e39f0aa7ba9cfa54a15bcc3be73426bc3ca04a17ebfb56fb"} Dec 03 14:08:56.318787 master-0 kubenswrapper[4430]: I1203 14:08:56.318764 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:56.351609 master-0 kubenswrapper[4430]: I1203 14:08:56.335078 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:08:56.351609 master-0 kubenswrapper[4430]: I1203 14:08:56.341309 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"767861460b1867cff246649b385d704cfc2138843597fedbe28a37ed6a1fcc6b"} Dec 03 14:08:56.351609 master-0 kubenswrapper[4430]: I1203 14:08:56.341442 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"a65945534e8a6e7e613e7575f9eddf8fc5c8e529121d27912d57159ce4d4ff67"} Dec 03 14:08:56.363596 master-0 kubenswrapper[4430]: I1203 14:08:56.363542 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"c27616ee97a233440b469b407debbcfb798dea5820539899850ae5f5e5b89175"} Dec 03 14:08:56.395368 master-0 kubenswrapper[4430]: I1203 14:08:56.385213 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"06d9e21dc5026b2ff8ed704174dd5237e899ac63961af3e8259a610d962004eb"} Dec 03 14:08:56.395368 master-0 
kubenswrapper[4430]: I1203 14:08:56.385263 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"ec5722f5529eb51f3812e83ffa10d076433aba61390ef045b51dbc13c084feab"} Dec 03 14:08:56.443497 master-0 kubenswrapper[4430]: I1203 14:08:56.443228 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"f430efa90aa1a14922254c020cd93a9e38056484cff7a2242fbc1eaaf67809b1"} Dec 03 14:08:56.499631 master-0 kubenswrapper[4430]: I1203 14:08:56.499412 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"df7772326af1c4ac74c7b47c9e456bf0b8dba5b05ac47dd7020f9aef132452b5"} Dec 03 14:08:56.499631 master-0 kubenswrapper[4430]: I1203 14:08:56.499482 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"20b2c40482fb8b14da92e947cd976d0bf6af158ed89328c00286e33619c30dcd"} Dec 03 14:08:56.532642 master-0 kubenswrapper[4430]: I1203 14:08:56.526040 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} Dec 03 14:08:56.539312 master-0 kubenswrapper[4430]: I1203 14:08:56.534510 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" 
event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"099b48ccbb9a5b34c5ad20c1e3c5d1ce1d5165114a76123c9845243562480fee"} Dec 03 14:08:56.539312 master-0 kubenswrapper[4430]: I1203 14:08:56.534599 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"d8235956ed7349941c7eac761bd95b691b9f7fe8554b29fb4092ebda3d1218ac"} Dec 03 14:08:56.674973 master-0 kubenswrapper[4430]: I1203 14:08:56.674919 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:56.674973 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:56.674973 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:56.674973 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:56.675324 master-0 kubenswrapper[4430]: I1203 14:08:56.674987 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:57.541598 master-0 kubenswrapper[4430]: I1203 14:08:57.540471 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"8cc6882da43023ea8951af93697a69ca8a0382834a729bde95b26bf0519721b7"} Dec 03 14:08:57.549194 master-0 kubenswrapper[4430]: I1203 14:08:57.546885 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" 
event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"5d83d522f828e748d7d032f3ff1936675a733a53ba0aa7f39e28915f3f2072b9"} Dec 03 14:08:57.549194 master-0 kubenswrapper[4430]: I1203 14:08:57.547252 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"cbde1526ce69020f609a0a787c15c5200eb868e939f05ca55f2dc916654b3758"} Dec 03 14:08:57.561192 master-0 kubenswrapper[4430]: I1203 14:08:57.559743 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"1acd31fad8aa0c159e3b5c0170f410c8a94f96066cf8a52d5b0b8f656f8c7e85"} Dec 03 14:08:57.586471 master-0 kubenswrapper[4430]: I1203 14:08:57.583403 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"02f3223847d2b88cd4713521196b0659169123c9f46e9e1ff34a5fa5e06b9b76"} Dec 03 14:08:57.600819 master-0 kubenswrapper[4430]: I1203 14:08:57.600539 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"2fe126435264098876b7af1b0b1c3cc93f24899e226ee4a6f79667cf9a3e7b3d"} Dec 03 14:08:57.600819 master-0 kubenswrapper[4430]: I1203 14:08:57.600599 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"ac3a0b768a969b7cb830680f65484af95b0c927396ac52266a4bb3bb6fe403d2"} Dec 03 14:08:57.605441 master-0 kubenswrapper[4430]: I1203 
14:08:57.604328 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"32f33a8b3c820eff796ad6f9051a5bf68fd94da78a37ae44f9dab8ddbd2cbf58"} Dec 03 14:08:57.609643 master-0 kubenswrapper[4430]: I1203 14:08:57.605718 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"78d92526a2e67c8e8362bda6c2ac81216895107d89915ca11b7c639134bfeb2f"} Dec 03 14:08:57.609643 master-0 kubenswrapper[4430]: I1203 14:08:57.606543 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"f2026d225ee9f397a7cd4fa8edf9af6b016695996bb6647f63e19f94c02cd74e"} Dec 03 14:08:57.609643 master-0 kubenswrapper[4430]: I1203 14:08:57.609243 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"cd364bafef9423e386ca2bc232819cdff09f48923462d61e14b3a33ef4b7ea8c"} Dec 03 14:08:57.623918 master-0 kubenswrapper[4430]: I1203 14:08:57.623858 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="06d9e21dc5026b2ff8ed704174dd5237e899ac63961af3e8259a610d962004eb" exitCode=0 Dec 03 14:08:57.624141 master-0 kubenswrapper[4430]: I1203 14:08:57.623954 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"06d9e21dc5026b2ff8ed704174dd5237e899ac63961af3e8259a610d962004eb"} Dec 03 14:08:57.624141 master-0 kubenswrapper[4430]: I1203 
14:08:57.623995 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"d66d8bfaeb58fed7f04b65ebb52929c7756d51c1efb8fd952bccd9549c975a8f"} Dec 03 14:08:57.633591 master-0 kubenswrapper[4430]: I1203 14:08:57.633519 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"695daa3f9b8c987291ee8aef187dbc80140a28edc7922c6980d59a517b9e5408"} Dec 03 14:08:57.633710 master-0 kubenswrapper[4430]: I1203 14:08:57.633599 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"0d61363b278a75b5f85c0c79754ef9b173d6e17325b5732bd81885b87ecb44f3"} Dec 03 14:08:57.640822 master-0 kubenswrapper[4430]: I1203 14:08:57.637683 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} Dec 03 14:08:57.640822 master-0 kubenswrapper[4430]: I1203 14:08:57.637748 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} Dec 03 14:08:57.640822 master-0 kubenswrapper[4430]: I1203 14:08:57.639209 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"e78bc4033a7d11b7ab700626beaee505fa827f457a6adfe8d57093e794239276"} Dec 03 
14:08:57.640822 master-0 kubenswrapper[4430]: I1203 14:08:57.640198 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.641297 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"4f0f4699296934aaf4603448dad275a9fe5b6e549e0c4bcacd729188dbd625a0"} Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.641610 4430 patch_prober.go:28] interesting pod/packageserver-7c64dd9d8b-49skr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" start-of-body= Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.641659 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.642562 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"a031cd756eea76671239b5d8b44ca0f768d8d3e2d3ae8479ad05b4a41ff5b210"} Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.646854 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" 
event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"663e8a37fd419b7f79754aa24f5933c92f81a1598e294f4a7ce88dc057a79131"} Dec 03 14:08:57.648449 master-0 kubenswrapper[4430]: I1203 14:08:57.646929 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"92b77c713ba8122cf7e072422e165fd591a6606d43e2e5cfd00d9c8ba47b7fe1"} Dec 03 14:08:57.657464 master-0 kubenswrapper[4430]: I1203 14:08:57.650227 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"d4c5ef857face5078f064f142c95ef9cf5d5f9501a257d86d44909b69957ac66"} Dec 03 14:08:57.657464 master-0 kubenswrapper[4430]: I1203 14:08:57.652776 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"b30bd814850ac8431d649e2058f0e9745fe182aa039adc2325ddb7467beaac1f"} Dec 03 14:08:57.663955 master-0 kubenswrapper[4430]: I1203 14:08:57.661788 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"9cca1e090f0cd75fde81c657a622e5f5e927ab2567ea1f328de6135b6de36a2b"} Dec 03 14:08:57.678208 master-0 kubenswrapper[4430]: I1203 14:08:57.674868 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:08:57.678208 master-0 
kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:08:57.678208 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:08:57.678208 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:08:57.678208 master-0 kubenswrapper[4430]: I1203 14:08:57.674941 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:08:57.678208 master-0 kubenswrapper[4430]: I1203 14:08:57.675730 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"e158de6426d7e011009936064f5defb543160cf003d7ed52a883a8507ea64fe3"} Dec 03 14:08:57.724580 master-0 kubenswrapper[4430]: I1203 14:08:57.718053 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"82bfca5befe892227dcb2cc9f2c980331e3b5591d6fe13cb2f4b26d2be585fe1"} Dec 03 14:08:57.754672 master-0 kubenswrapper[4430]: I1203 14:08:57.744034 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"f4124a45c982dd6ad8408966f456c2e401e595751b2120febf21b52aa6853950"} Dec 03 14:08:57.754672 master-0 kubenswrapper[4430]: I1203 14:08:57.744100 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"0b6037f15112a4a3bd7381c6217dac17aca50e9dd2f942e1f9dba1fcd40b259d"} Dec 03 14:08:57.779128 master-0 kubenswrapper[4430]: I1203 14:08:57.776561 4430 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"90f7dd0f59d77bc567528a712e68306607fec16336108bf30a466d507dcf3251"} Dec 03 14:08:57.782429 master-0 kubenswrapper[4430]: I1203 14:08:57.780252 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"9646ac4322e8943159750518610000e0604a39fc44ae3483c107c5ffb2c49968"} Dec 03 14:08:57.789433 master-0 kubenswrapper[4430]: I1203 14:08:57.783571 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"2fcfe31c753b2c3a424a2fef60193cdda9cfa653d7d5e18e02e63358fb89fd73"} Dec 03 14:08:57.789433 master-0 kubenswrapper[4430]: I1203 14:08:57.785650 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"a164e5dcdfb95eb9ebdef65c8f74ccecfa67a5989c4de3033753fd7ca8939551"} Dec 03 14:08:57.808326 master-0 kubenswrapper[4430]: I1203 14:08:57.798745 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"8b009fd8a9bad6827f986727dcda599e0b5809b316a6de90d75d38b79726932c"} Dec 03 14:08:57.808326 master-0 kubenswrapper[4430]: I1203 14:08:57.798792 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"f6b0a5f597fc99a1dff83a421933bf55dc9808dd51bfe542c807a86b6e6d7922"} Dec 03 14:08:57.815435 
master-0 kubenswrapper[4430]: W1203 14:08:57.814260 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod911f6333_cdb0_425c_b79b_f892444b7097.slice/crio-6adda2cd66212efbe15a6f9bedc6edb9c1563f032c39d76fd6e49c342592e2a0 WatchSource:0}: Error finding container 6adda2cd66212efbe15a6f9bedc6edb9c1563f032c39d76fd6e49c342592e2a0: Status 404 returned error can't find the container with id 6adda2cd66212efbe15a6f9bedc6edb9c1563f032c39d76fd6e49c342592e2a0
Dec 03 14:08:57.889809 master-0 kubenswrapper[4430]: W1203 14:08:57.881335 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcc78129_4a81_410e_9a42_b12043b5a75a.slice/crio-e009e0c925f1ea7abc1acfd0b4378e0e40a901b0b3a666ce25ed9367ce31e958 WatchSource:0}: Error finding container e009e0c925f1ea7abc1acfd0b4378e0e40a901b0b3a666ce25ed9367ce31e958: Status 404 returned error can't find the container with id e009e0c925f1ea7abc1acfd0b4378e0e40a901b0b3a666ce25ed9367ce31e958
Dec 03 14:08:58.102678 master-0 kubenswrapper[4430]: W1203 14:08:58.102164 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52100521_67e9_40c9_887c_eda6560f06e0.slice/crio-413175e7e49da93ead24281991c103d075ab45d5d92d0dbd913716c1238afeff WatchSource:0}: Error finding container 413175e7e49da93ead24281991c103d075ab45d5d92d0dbd913716c1238afeff: Status 404 returned error can't find the container with id 413175e7e49da93ead24281991c103d075ab45d5d92d0dbd913716c1238afeff
Dec 03 14:08:58.122237 master-0 kubenswrapper[4430]: W1203 14:08:58.118961 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3200abb_a440_44db_8897_79c809c1d838.slice/crio-7bf70af3f09db2d96a85682f64e57f15fe5e480a5dd1cf6c25d74898602d8539 WatchSource:0}: Error finding container 7bf70af3f09db2d96a85682f64e57f15fe5e480a5dd1cf6c25d74898602d8539: Status 404 returned error can't find the container with id 7bf70af3f09db2d96a85682f64e57f15fe5e480a5dd1cf6c25d74898602d8539
Dec 03 14:08:58.677715 master-0 kubenswrapper[4430]: I1203 14:08:58.677664 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:58.677715 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:58.677715 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:58.677715 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:58.680672 master-0 kubenswrapper[4430]: I1203 14:08:58.677734 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:08:58.899636 master-0 kubenswrapper[4430]: I1203 14:08:58.895709 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"39932b4293aed1db2841e79e74a86e35a0ad31c316d3a89f0383cc16c454c0e6"}
Dec 03 14:08:58.899952 master-0 kubenswrapper[4430]: I1203 14:08:58.899359 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"f8ecd06df66737c4bcd694892248e7f177dab6ea4202ec1a2a53df310ad86fb6"}
Dec 03 14:08:58.908401 master-0 kubenswrapper[4430]: I1203 14:08:58.902524 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"ab76a9a8e06025c1dec72e891ccadecf9327b11d180768dbf27a99508a221390"}
Dec 03 14:08:58.915706 master-0 kubenswrapper[4430]: I1203 14:08:58.912983 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"85dd8b8753806852189f415c9220b95a22cfb5ce8b7ab624fa84629d3cdc12db"}
Dec 03 14:08:58.923557 master-0 kubenswrapper[4430]: I1203 14:08:58.922024 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"1e0f5c758fef44a1cf56d65a61567af61aa3dbbf05960724b8800db96778ab2e"}
Dec 03 14:08:58.934675 master-0 kubenswrapper[4430]: I1203 14:08:58.933396 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"6e7b2d866bacc29c710bc8ce7afa7027ba2c2cc52fe1683cdedd96674d20b68e"}
Dec 03 14:08:58.944052 master-0 kubenswrapper[4430]: I1203 14:08:58.943970 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"a57e3fd516c366f59994afcbac00470eef47d782a45beaa4157336e3867253a5"}
Dec 03 14:08:58.952181 master-0 kubenswrapper[4430]: I1203 14:08:58.952107 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"868b4cdc8a4fe91b5ca34a18b1d879aa41665f52be1f78b8a23f6bad9d2f2106"}
Dec 03 14:08:58.955033 master-0 kubenswrapper[4430]: I1203 14:08:58.953614 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:08:58.962700 master-0 kubenswrapper[4430]: I1203 14:08:58.959707 4430 patch_prober.go:28] interesting pod/marketplace-operator-7d67745bb7-dwcxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body=
Dec 03 14:08:58.962700 master-0 kubenswrapper[4430]: I1203 14:08:58.959798 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused"
Dec 03 14:08:58.962700 master-0 kubenswrapper[4430]: I1203 14:08:58.962269 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"9cb970a0ff4f56774de29ca6c880effdf313b3b85e4caf3d9d771902b809383e"}
Dec 03 14:08:58.962700 master-0 kubenswrapper[4430]: I1203 14:08:58.962654 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:08:58.989371 master-0 kubenswrapper[4430]: I1203 14:08:58.989295 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"61676d8f0839837a58e20e6ebffdec229bf6b09a6cc8735b75e6eb5a7df3a05f"}
Dec 03 14:08:58.993940 master-0 kubenswrapper[4430]: I1203 14:08:58.993841 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"3dcb8c81f958205d8159d6f4a866048522077939bb988f60fe224e9ff76508ba"}
Dec 03 14:08:58.995836 master-0 kubenswrapper[4430]: I1203 14:08:58.995465 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"d790355d08b1339b9ab6357912ac5eed077a082b3154f01ad72fb83405dcf878"}
Dec 03 14:08:59.000590 master-0 kubenswrapper[4430]: I1203 14:08:58.999074 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerStarted","Data":"7bf70af3f09db2d96a85682f64e57f15fe5e480a5dd1cf6c25d74898602d8539"}
Dec 03 14:08:59.007778 master-0 kubenswrapper[4430]: I1203 14:08:59.007723 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"46872b6388f4e7de23159cbce28c7d7790585a0246ce8446e98add9bf9847a50"}
Dec 03 14:08:59.010642 master-0 kubenswrapper[4430]: I1203 14:08:59.010608 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"13ec91218c42c17f80b09f6d4e58fe81786414c4beb9f8fe61438b1a78ffa10e"}
Dec 03 14:08:59.013503 master-0 kubenswrapper[4430]: I1203 14:08:59.013471 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"4be1b0de8eba142864a9554f036f3ebbec16acc2f0115a4dd6cf583de00fb448"}
Dec 03 14:08:59.015492 master-0 kubenswrapper[4430]: I1203 14:08:59.015409 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"29de5d034eb20e297b09f00601eab87cc7d2338b3c47bde853e73be6d5f069c3"}
Dec 03 14:08:59.017068 master-0 kubenswrapper[4430]: I1203 14:08:59.016890 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"7157ee7ceb47a456ad30e8fc68777a219bc615840186917e36d69ff4cf662124"}
Dec 03 14:08:59.023814 master-0 kubenswrapper[4430]: I1203 14:08:59.023591 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"e009e0c925f1ea7abc1acfd0b4378e0e40a901b0b3a666ce25ed9367ce31e958"}
Dec 03 14:08:59.028522 master-0 kubenswrapper[4430]: I1203 14:08:59.028479 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"}
Dec 03 14:08:59.033123 master-0 kubenswrapper[4430]: I1203 14:08:59.033085 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"09ffab914d09884b4983c574c6d24d6a7e9040ec165ece1b33577da22be5809d"}
Dec 03 14:08:59.033881 master-0 kubenswrapper[4430]: I1203 14:08:59.033848 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"bcaa61cdf3c5a885d6d7d35f36c3bec8ed050a6239e18ebac23ac94a259b213c"}
Dec 03 14:08:59.035192 master-0 kubenswrapper[4430]: I1203 14:08:59.035167 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"8295a91b861fd14a5918e09af18176a9b2d32759ced127e24167a11bfe9a8280"}
Dec 03 14:08:59.036371 master-0 kubenswrapper[4430]: I1203 14:08:59.036315 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"a7310a4cebf4095df01633773bdd5d5b998afb06cbfea74d5add579c941dfa85"}
Dec 03 14:08:59.055768 master-0 kubenswrapper[4430]: I1203 14:08:59.054828 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerStarted","Data":"12d69da055ffcd27322e3a8adecbefb36c38d1067abe73e04477a138ccba0aa7"}
Dec 03 14:08:59.072820 master-0 kubenswrapper[4430]: I1203 14:08:59.072675 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"cc37cc52d5952dec04392be7f1d07358f44ea3b892da63f7499aef374fae7b9f"}
Dec 03 14:08:59.093953 master-0 kubenswrapper[4430]: I1203 14:08:59.093521 4430 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="46be62d73d1228ef53a5783221ce918be2488925b62d613c6753d060e8066fca" exitCode=0
Dec 03 14:08:59.094850 master-0 kubenswrapper[4430]: I1203 14:08:59.094758 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"46be62d73d1228ef53a5783221ce918be2488925b62d613c6753d060e8066fca"}
Dec 03 14:08:59.104146 master-0 kubenswrapper[4430]: I1203 14:08:59.104080 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"ad07beafbe13828c52b30808df988ac81e461152f9bfedcb00ac9023a7397a93"}
Dec 03 14:08:59.109376 master-0 kubenswrapper[4430]: I1203 14:08:59.109310 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"4a2b8028d7c2b5e304e7ce9eca67ce37ed97bbf7fef945eae54a284c26d8dd52"}
Dec 03 14:08:59.110615 master-0 kubenswrapper[4430]: I1203 14:08:59.110398 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"63d27d0f91239a62847325cacffe6166f5b9346592b0bb98d96c86cfbda4a825"}
Dec 03 14:08:59.111806 master-0 kubenswrapper[4430]: I1203 14:08:59.111602 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:08:59.116824 master-0 kubenswrapper[4430]: I1203 14:08:59.116768 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:08:59.116985 master-0 kubenswrapper[4430]: I1203 14:08:59.116930 4430 patch_prober.go:28] interesting pod/olm-operator-76bd5d69c7-fjrrg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body=
Dec 03 14:08:59.117043 master-0 kubenswrapper[4430]: I1203 14:08:59.116997 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused"
Dec 03 14:08:59.118853 master-0 kubenswrapper[4430]: I1203 14:08:59.118807 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"3f589f3ae027c1368f45f584fae55e35ec5b4a3034fcc4db44a5980b9e54a0f8"}
Dec 03 14:08:59.126323 master-0 kubenswrapper[4430]: I1203 14:08:59.126274 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"40fb59cfd74e6041170c92f1ee6dc799a6c5d221e4d7a05d6614563d67bc4a19"}
Dec 03 14:08:59.128589 master-0 kubenswrapper[4430]: I1203 14:08:59.128557 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:59.137241 master-0 kubenswrapper[4430]: I1203 14:08:59.137175 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"413175e7e49da93ead24281991c103d075ab45d5d92d0dbd913716c1238afeff"}
Dec 03 14:08:59.150038 master-0 kubenswrapper[4430]: I1203 14:08:59.149908 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:08:59.153134 master-0 kubenswrapper[4430]: I1203 14:08:59.153077 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"43abb71341ec0a1dc2f76343a6ff111932cdd76ae5d3db47c37a213cf88b3c55"}
Dec 03 14:08:59.178325 master-0 kubenswrapper[4430]: I1203 14:08:59.178272 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"11a7a2372ed4ed275b5eb5731031a4209866747aae175ba651b50cc3a8dcc0fc"}
Dec 03 14:08:59.185107 master-0 kubenswrapper[4430]: I1203 14:08:59.185056 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"b57a70d2e2776e5b5db702e622404000da5f21cbc54df8ab412e26287d1d43bb"}
Dec 03 14:08:59.201104 master-0 kubenswrapper[4430]: I1203 14:08:59.201007 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"30b76d074acf678f7d7d24ad38910415aec45982b7f54f5d70db449af45edb46"}
Dec 03 14:08:59.206035 master-0 kubenswrapper[4430]: I1203 14:08:59.205963 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerStarted","Data":"3b82db4ee3affa2b67ea7317c7e6856037ddf6fe7eac4d19fdf2d717e76f0218"}
Dec 03 14:08:59.216510 master-0 kubenswrapper[4430]: I1203 14:08:59.213397 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"4361f65ecbbdef8fe37d72db037eebd04f7f5be97175ebb828188ec8b2a32ed5"}
Dec 03 14:08:59.221843 master-0 kubenswrapper[4430]: I1203 14:08:59.221793 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerStarted","Data":"00217725de46a89eb8db92c2bd64b28b001a01546be542f6e8c5a39e356e0798"}
Dec 03 14:08:59.225378 master-0 kubenswrapper[4430]: I1203 14:08:59.225343 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"4a52998d3e2c831cd3e0252ad2b4031d2564fefaca6ffb1a4074fbc3cf15e530"}
Dec 03 14:08:59.229193 master-0 kubenswrapper[4430]: I1203 14:08:59.226674 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"107170fc667056d06904f801835e625e80d47800aa6b7e4837aeabb3fa53c3e5"}
Dec 03 14:08:59.229193 master-0 kubenswrapper[4430]: I1203 14:08:59.229145 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"c123b23ebe0056270214fb7a2c5e5f02c8177eacf32ad97b9e51ac2d145232fa"}
Dec 03 14:08:59.238879 master-0 kubenswrapper[4430]: I1203 14:08:59.237846 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"6adda2cd66212efbe15a6f9bedc6edb9c1563f032c39d76fd6e49c342592e2a0"}
Dec 03 14:08:59.261134 master-0 kubenswrapper[4430]: I1203 14:08:59.257533 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"6b14e5bc5b1bf3a345fc43487341e7bb2879f457a40a92cd429dc7444ddd6aa7"}
Dec 03 14:08:59.333566 master-0 kubenswrapper[4430]: I1203 14:08:59.333434 4430 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 14:08:59.675759 master-0 kubenswrapper[4430]: I1203 14:08:59.675699 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:08:59.675759 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:08:59.675759 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:08:59.675759 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:08:59.676046 master-0 kubenswrapper[4430]: I1203 14:08:59.675793 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:00.265247 master-0 kubenswrapper[4430]: I1203 14:09:00.265170 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"94b1e51cb64ed3fd33e9d2d28cae44b8739f1a59eadd2afa2ef47ab28953c6ae"}
Dec 03 14:09:00.267012 master-0 kubenswrapper[4430]: I1203 14:09:00.266971 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"664c3ff2947feb1a70318c5ff0027e09769155ce2375556704dbb4dae528edde"}
Dec 03 14:09:00.268372 master-0 kubenswrapper[4430]: I1203 14:09:00.267948 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"ee5fbbdb28f502e43e6a512cbbf957778f369fe882b43acd145d347e95dbf4df"}
Dec 03 14:09:00.268372 master-0 kubenswrapper[4430]: I1203 14:09:00.268109 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:09:00.269925 master-0 kubenswrapper[4430]: I1203 14:09:00.269852 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"2b8e810972222c8e8af4262192b9de6b2880ece9c7675e0e85c1ef0fb73b69e3"}
Dec 03 14:09:00.273043 master-0 kubenswrapper[4430]: I1203 14:09:00.272794 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:09:00.273387 master-0 kubenswrapper[4430]: I1203 14:09:00.273244 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"85b3b387a530aa9d604f58b1466a82ed6119e4fd14759d50e69373a700eb6343"}
Dec 03 14:09:00.276295 master-0 kubenswrapper[4430]: I1203 14:09:00.276240 4430 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="5d13d981728eca417e8af84ce65bff50dd0e00356f143352f8595b318e33c624" exitCode=0
Dec 03 14:09:00.276399 master-0 kubenswrapper[4430]: I1203 14:09:00.276328 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"5d13d981728eca417e8af84ce65bff50dd0e00356f143352f8595b318e33c624"}
Dec 03 14:09:00.279065 master-0 kubenswrapper[4430]: I1203 14:09:00.279017 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"44353bbb262f85a0b0d8143ae789fbdfa6f75ad1779eb1521fb956532d8d3c62"}
Dec 03 14:09:00.280816 master-0 kubenswrapper[4430]: I1203 14:09:00.280746 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerStarted","Data":"c01edad1db506ce1a440eec485368dc53175e475c8c14d77a9938e14bf9c40c8"}
Dec 03 14:09:00.282256 master-0 kubenswrapper[4430]: I1203 14:09:00.282219 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"cd2b922ff000e023561fd4d7288f18b8e8f2cf57cb7e245f6f79c898b877df6c"}
Dec 03 14:09:00.286093 master-0 kubenswrapper[4430]: I1203 14:09:00.284465 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"8e0205c0280d42ee887f4cb1dfaec0eef8c08f36dae8698aa542cc3955e73b63"}
Dec 03 14:09:00.287851 master-0 kubenswrapper[4430]: I1203 14:09:00.287805 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:09:00.288054 master-0 kubenswrapper[4430]: I1203 14:09:00.288030 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:09:00.673875 master-0 kubenswrapper[4430]: I1203 14:09:00.673799 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:00.673875 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:00.673875 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:00.673875 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:00.674221 master-0 kubenswrapper[4430]: I1203 14:09:00.673881 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:01.304337 master-0 kubenswrapper[4430]: I1203 14:09:01.303476 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"bd9bf1d9a40db0a69c4725af8b9a8194c4d981c131850180aa86d88426085453"}
Dec 03 14:09:01.306274 master-0 kubenswrapper[4430]: I1203 14:09:01.305031 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"030f7dd0dec22bf96156b22c215fe1cbc7bc1825867377f5fd37e9f774beb6ec"}
Dec 03 14:09:01.308541 master-0 kubenswrapper[4430]: I1203 14:09:01.307469 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"c127d86f25382d0f34fa623953455aa5a7250f26a5d55cdc092b594a25f7b6f6"}
Dec 03 14:09:01.309125 master-0 kubenswrapper[4430]: I1203 14:09:01.308913 4430 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="279e318479b1ee7b00fee741220614115be8aead9caeb2652e7fd65fbb759aeb" exitCode=0
Dec 03 14:09:01.309262 master-0 kubenswrapper[4430]: I1203 14:09:01.309233 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerDied","Data":"279e318479b1ee7b00fee741220614115be8aead9caeb2652e7fd65fbb759aeb"}
Dec 03 14:09:01.310751 master-0 kubenswrapper[4430]: I1203 14:09:01.310720 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerStarted","Data":"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"}
Dec 03 14:09:01.314622 master-0 kubenswrapper[4430]: I1203 14:09:01.311682 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:09:01.314622 master-0 kubenswrapper[4430]: I1203 14:09:01.313853 4430 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="65d131ce0a571c5ee9e932d3ab473c5833086d299bd47234a53fccf5a581f619" exitCode=0
Dec 03 14:09:01.314622 master-0 kubenswrapper[4430]: I1203 14:09:01.313930 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"65d131ce0a571c5ee9e932d3ab473c5833086d299bd47234a53fccf5a581f619"}
Dec 03 14:09:01.317315 master-0 kubenswrapper[4430]: I1203 14:09:01.316992 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerStarted","Data":"113fc17037ed6814061d8e6003d126d84ffb64ce5d368f93c8fa094292f35bc6"}
Dec 03 14:09:01.319971 master-0 kubenswrapper[4430]: I1203 14:09:01.319919 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"18c5a85efe83dfd9e427d45d6180c81438ff37c2286c08cbb7d19c4a3ea3360e"}
Dec 03 14:09:01.321381 master-0 kubenswrapper[4430]: I1203 14:09:01.321339 4430 generic.go:334] "Generic (PLEG): container finished" podID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerID="24cc9b7f18bd864475b669ca163ed8a3f5f2b7077812b02895fb154dcb27511e" exitCode=0
Dec 03 14:09:01.321471 master-0 kubenswrapper[4430]: I1203 14:09:01.321401 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerDied","Data":"24cc9b7f18bd864475b669ca163ed8a3f5f2b7077812b02895fb154dcb27511e"}
Dec 03 14:09:01.323349 master-0 kubenswrapper[4430]: I1203 14:09:01.323011 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"c6a1bd456199320e9417bea0616af7659530d807a4c92a44ce75df1bc574d0ea"}
Dec 03 14:09:01.327656 master-0 kubenswrapper[4430]: I1203 14:09:01.325065 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"17a152078d339564536fe253b0f7f334b94316a6596990d893962b36dd0fb828"}
Dec 03 14:09:01.328005 master-0 kubenswrapper[4430]: I1203 14:09:01.327966 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"268a80f5afa3735e3b2ff1f7c9d91828f76700e580dcfe3a093ed071f55a17f1"}
Dec 03 14:09:01.330941 master-0 kubenswrapper[4430]: I1203 14:09:01.330899 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"ac19b8941583f505bca7f19ba1737738de3c95824b818d9b1b242be3535ba095"}
Dec 03 14:09:01.333986 master-0 kubenswrapper[4430]: I1203 14:09:01.333953 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"}
Dec 03 14:09:01.336391 master-0 kubenswrapper[4430]: I1203 14:09:01.336359 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"3c4cf7436dc8e91930ad1164807bc6480fdcf36da20d7e99eac8c577daf73011"}
Dec 03 14:09:01.337977 master-0 kubenswrapper[4430]: I1203 14:09:01.337946 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"66b01e50689fc76d3d521545e6f121110eaac09350e8b3c349449c13b1cce1b9"}
Dec 03 14:09:01.339796 master-0 kubenswrapper[4430]: I1203 14:09:01.339765 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"6f164a6df1bde5320ab22b53ded4e042f338dc219f6766493bf70ff678182ddd"}
Dec 03 14:09:01.340916 master-0 kubenswrapper[4430]: I1203 14:09:01.340884 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerStarted","Data":"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"}
Dec 03 14:09:01.342331 master-0 kubenswrapper[4430]: I1203 14:09:01.342283 4430 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="cb245fdd1ded35f84a017fb3d05c897d11b99a2c87637a36de1e92f40b4bd328" exitCode=0
Dec 03 14:09:01.342430 master-0 kubenswrapper[4430]: I1203 14:09:01.342346 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerDied","Data":"cb245fdd1ded35f84a017fb3d05c897d11b99a2c87637a36de1e92f40b4bd328"}
Dec 03 14:09:01.344327 master-0 kubenswrapper[4430]: I1203 14:09:01.344283 4430 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="752bb6217aaabca9fa874d8ea8c62c092824a0c96aafd3fa82366336e5c03067" exitCode=0
Dec 03 14:09:01.344410 master-0 kubenswrapper[4430]: I1203 14:09:01.344380 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"752bb6217aaabca9fa874d8ea8c62c092824a0c96aafd3fa82366336e5c03067"}
Dec 03 14:09:01.346287 master-0 kubenswrapper[4430]: I1203 14:09:01.346141 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"1a40e7e57d434d04ea76ffb0ba1c24f68234d64b2f2b2242f23b072cbf5c5552"}
Dec 03 14:09:01.349130 master-0 kubenswrapper[4430]: I1203 14:09:01.349092 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"f1e23675313ea31d0d7ab7f575c940a2fa52cf5f2c1440a5117ca457764a1ec2"}
Dec 03 14:09:01.351664 master-0 kubenswrapper[4430]: I1203 14:09:01.350715 4430 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="026c17d1392210c757f5fe7c5483f5c93b351cb341b04fe3e4cb9778d5d87cc7" exitCode=0
Dec 03 14:09:01.351664 master-0 kubenswrapper[4430]: I1203 14:09:01.350774 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"026c17d1392210c757f5fe7c5483f5c93b351cb341b04fe3e4cb9778d5d87cc7"}
Dec 03 14:09:01.357679 master-0 kubenswrapper[4430]: I1203 14:09:01.357618 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"c809768d9f5c1be51c102ade94741e729d18869874debc05a750c4f2d9789d3d"}
Dec 03 14:09:01.358808 master-0 kubenswrapper[4430]: I1203 14:09:01.358775 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:09:01.359979 master-0 kubenswrapper[4430]: I1203 14:09:01.359945 4430 patch_prober.go:28] interesting pod/console-operator-77df56447c-vsrxx container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.75:8443/readyz\": dial tcp 10.128.0.75:8443: connect: connection refused" start-of-body=
Dec 03 14:09:01.360055 master-0 kubenswrapper[4430]: I1203 14:09:01.359988 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.75:8443/readyz\": dial tcp 10.128.0.75:8443: connect: connection refused"
Dec 03 14:09:01.362099 master-0 kubenswrapper[4430]: I1203 14:09:01.362048 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"f702f47197a7be997d18ff5a17914c0f7a106fc6c0ef420b592e9470e20aa846"}
Dec 03 14:09:01.511819 master-0 kubenswrapper[4430]: I1203 14:09:01.511722 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:09:01.678126 master-0 kubenswrapper[4430]: I1203 14:09:01.678067 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:01.678126 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:01.678126 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:01.678126 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:01.678496 master-0 kubenswrapper[4430]: I1203 14:09:01.678157 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:02.404583
master-0 kubenswrapper[4430]: I1203 14:09:02.403817 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"defde12eea6161668211d878a862d3062187d5a0fd68e127f046adf8e6fb307b"}
Dec 03 14:09:02.406308 master-0 kubenswrapper[4430]: I1203 14:09:02.406277 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:09:02.452317 master-0 kubenswrapper[4430]: I1203 14:09:02.452272 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"a9e6448c1ed22d7af273c868f2b82ebb2cd877ea3652f571176e7c3960d01c77"}
Dec 03 14:09:02.463081 master-0 kubenswrapper[4430]: I1203 14:09:02.460948 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"fcca5b73de79e0a5e92be5518e4eac372c0dbac239f33776e14037dde54264ca"}
Dec 03 14:09:02.477045 master-0 kubenswrapper[4430]: I1203 14:09:02.476635 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerStarted","Data":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"}
Dec 03 14:09:02.484897 master-0 kubenswrapper[4430]: I1203 14:09:02.478947 4430 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="c8bcc12326705f9fd39e7663f0425d1e7f3428864a7c7afe665a9e3bc134843f" exitCode=0
Dec 03 14:09:02.484897 master-0 kubenswrapper[4430]: I1203 14:09:02.478997 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"c8bcc12326705f9fd39e7663f0425d1e7f3428864a7c7afe665a9e3bc134843f"}
Dec 03 14:09:02.496216 master-0 kubenswrapper[4430]: I1203 14:09:02.496120 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"7478ef4361a9a8412f63ccae624bfe30f6d3fc0665bf96d1e52d3e33b8313db0"}
Dec 03 14:09:02.502357 master-0 kubenswrapper[4430]: I1203 14:09:02.497689 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"9e94732fe26d75a2d01265c881c4755632147def23bbe0323869d6c52a6c2b44"}
Dec 03 14:09:02.504970 master-0 kubenswrapper[4430]: I1203 14:09:02.502714 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:09:02.532049 master-0 kubenswrapper[4430]: I1203 14:09:02.530879 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"54b6140cbea417161375e860fb10d603f09badb0c109ed0bc2f7bc20067a5846"}
Dec 03 14:09:02.532049 master-0 kubenswrapper[4430]: I1203 14:09:02.530944 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:09:02.532049 master-0 kubenswrapper[4430]: I1203 14:09:02.530965 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:09:02.532049 master-0 kubenswrapper[4430]: I1203 14:09:02.531311 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:09:02.535104 master-0 kubenswrapper[4430]: I1203 14:09:02.534751 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:09:02.535104 master-0 kubenswrapper[4430]: I1203 14:09:02.534814 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:09:02.540413 master-0 kubenswrapper[4430]: I1203 14:09:02.540368 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:09:02.658916 master-0 kubenswrapper[4430]: I1203 14:09:02.658861 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:09:02.675846 master-0 kubenswrapper[4430]: I1203 14:09:02.675722 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:02.675846 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:02.675846 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:02.675846 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:02.675846 master-0 kubenswrapper[4430]: I1203 14:09:02.675789 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:03.124746 master-0 kubenswrapper[4430]: I1203 14:09:03.124614 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:09:03.542039 master-0 kubenswrapper[4430]: I1203 14:09:03.537631 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:09:03.542039 master-0 kubenswrapper[4430]: I1203 14:09:03.537697 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:09:03.542039 master-0 kubenswrapper[4430]: I1203 14:09:03.538071 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:09:03.542039 master-0 kubenswrapper[4430]: I1203 14:09:03.538096 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:09:03.625507 master-0 kubenswrapper[4430]: I1203 14:09:03.624545 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"54cdc243b0c27d1e24de30261008992ee2e9ac5df14374b004cae38b4519ba87"}
Dec 03 14:09:03.640871 master-0 kubenswrapper[4430]: I1203 14:09:03.640795 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"27cdbfb2504c37ed6726323d5485c0e2e22b89f61a186620fdadabff753518bf"}
Dec 03 14:09:03.640871 master-0 kubenswrapper[4430]: I1203 14:09:03.640866 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"a30f66b778c323b468e53a0d452494721896105089b8f0bde3f60b49ba83e072"}
Dec 03 14:09:03.648077 master-0 kubenswrapper[4430]: I1203 14:09:03.647266 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"7283e9c3ac39e31379bde2bcb4f5c88ca8ac8b17f2a1ee4bbb4c0394215cba1e"}
Dec 03 14:09:03.648077 master-0 kubenswrapper[4430]: I1203 14:09:03.647314 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"f5133cd1fc1a98bb373d1f7da6b11b62cd31e7626fda75c20a4a8824a0569c80"}
Dec 03 14:09:03.674196 master-0 kubenswrapper[4430]: I1203 14:09:03.674079 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:03.674196 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:03.674196 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:03.674196 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:03.675076 master-0 kubenswrapper[4430]: I1203 14:09:03.674904 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:03.675241 master-0 kubenswrapper[4430]: I1203 14:09:03.674476 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"4d22eaf8a4d68eaf0bfda914c7a1d7972816856a46c952cb9e9540f4708a3d85"}
Dec 03 14:09:03.693595 master-0 kubenswrapper[4430]: I1203 14:09:03.693530 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"2852cd9924206706ff40dbbdc8c0f832d0ce9d4d398a466915f37d7045374fe4"}
Dec 03 14:09:03.707871 master-0 kubenswrapper[4430]: I1203 14:09:03.707813 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"0ccfb76dcfef4414554f96e7092cd92bace5d1ec302afacff31afb3e00b45190"}
Dec 03 14:09:03.734311 master-0 kubenswrapper[4430]: I1203 14:09:03.733649 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"a57f3a556881f57b28aa81a1b7f6bbd44306b4874d2df36ed0f0bc67eb848026"}
Dec 03 14:09:03.734711 master-0 kubenswrapper[4430]: I1203 14:09:03.734684 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:09:03.752440 master-0 kubenswrapper[4430]: I1203 14:09:03.751210 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"7ea7993bbae9be8c1951415ad4b0341048ac89f53e9f50b85cb8ddd57e46b492"}
Dec 03 14:09:03.762942 master-0 kubenswrapper[4430]: I1203 14:09:03.762900 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"1813e95aa9179f1c5292e5b2348000ccebe08276790c97d9ea0ab42ce9345f9c"}
Dec 03 14:09:03.789758 master-0 kubenswrapper[4430]: I1203 14:09:03.784228 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"4ea3be14129d747d3319cc01eb1e20f1c667535f6d7f6f3f37fa56ddded71556"}
Dec 03 14:09:03.790184 master-0 kubenswrapper[4430]: I1203 14:09:03.789719 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body=
Dec 03 14:09:03.790356 master-0 kubenswrapper[4430]: I1203 14:09:03.790323 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused"
Dec 03 14:09:03.790787 master-0 kubenswrapper[4430]: I1203 14:09:03.790768 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:09:03.803471 master-0 kubenswrapper[4430]: I1203 14:09:03.803137 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:09:04.609855 master-0 kubenswrapper[4430]: I1203 14:09:04.609727 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:09:04.888538 master-0 kubenswrapper[4430]: I1203 14:09:04.888208 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:04.888538 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:04.888538 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:04.888538 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:04.888538 master-0 kubenswrapper[4430]: I1203 14:09:04.888285 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:04.897721 master-0 kubenswrapper[4430]: I1203 14:09:04.897651 4430 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="4d22eaf8a4d68eaf0bfda914c7a1d7972816856a46c952cb9e9540f4708a3d85" exitCode=0
Dec 03 14:09:04.897909 master-0 kubenswrapper[4430]: I1203 14:09:04.897745 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"4d22eaf8a4d68eaf0bfda914c7a1d7972816856a46c952cb9e9540f4708a3d85"}
Dec 03 14:09:04.902855 master-0 kubenswrapper[4430]: I1203
14:09:04.902791 4430 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="2852cd9924206706ff40dbbdc8c0f832d0ce9d4d398a466915f37d7045374fe4" exitCode=0
Dec 03 14:09:04.902926 master-0 kubenswrapper[4430]: I1203 14:09:04.902892 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"2852cd9924206706ff40dbbdc8c0f832d0ce9d4d398a466915f37d7045374fe4"}
Dec 03 14:09:04.905363 master-0 kubenswrapper[4430]: I1203 14:09:04.905316 4430 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="54cdc243b0c27d1e24de30261008992ee2e9ac5df14374b004cae38b4519ba87" exitCode=0
Dec 03 14:09:04.905453 master-0 kubenswrapper[4430]: I1203 14:09:04.905399 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"54cdc243b0c27d1e24de30261008992ee2e9ac5df14374b004cae38b4519ba87"}
Dec 03 14:09:04.912037 master-0 kubenswrapper[4430]: I1203 14:09:04.911922 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerStarted","Data":"233eca8d43479e016f98704c0539e9fe320cd7a7c4ee637c4f56e040b2892a72"}
Dec 03 14:09:04.920558 master-0 kubenswrapper[4430]: I1203 14:09:04.920508 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:09:05.100787 master-0 kubenswrapper[4430]: I1203 14:09:05.100744 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:09:05.101066 master-0 kubenswrapper[4430]: I1203 14:09:05.101050 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c5d7cd7f9-2hp75"
Dec 03 14:09:05.102636 master-0 kubenswrapper[4430]: I1203 14:09:05.102559 4430 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Dec 03 14:09:05.102807 master-0 kubenswrapper[4430]: I1203 14:09:05.102712 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Dec 03 14:09:05.118292 master-0 kubenswrapper[4430]: I1203 14:09:05.118239 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:09:05.118795 master-0 kubenswrapper[4430]: I1203 14:09:05.118758 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-648d88c756-vswh8"
Dec 03 14:09:05.123160 master-0 kubenswrapper[4430]: I1203 14:09:05.123125 4430 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body=
Dec 03 14:09:05.123243 master-0 kubenswrapper[4430]: I1203 14:09:05.123188 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused"
Dec 03 14:09:05.124957 master-0 kubenswrapper[4430]: I1203 14:09:05.124911 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:09:05.160122 master-0 kubenswrapper[4430]: I1203 14:09:05.159988 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:09:05.160529 master-0 kubenswrapper[4430]: I1203 14:09:05.160480 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:09:05.199122 master-0 kubenswrapper[4430]: I1203 14:09:05.199057 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:09:05.560859 master-0 kubenswrapper[4430]: I1203 14:09:05.560754 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:09:05.612662 master-0 kubenswrapper[4430]: I1203 14:09:05.612334 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:09:05.674276 master-0 kubenswrapper[4430]: I1203 14:09:05.674207 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:05.674276 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:05.674276 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:05.674276 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:05.674646 master-0 kubenswrapper[4430]: I1203 14:09:05.674280 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:05.927669 master-0 kubenswrapper[4430]: I1203 14:09:05.927332 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:09:05.958170 master-0 kubenswrapper[4430]: I1203 14:09:05.958115 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:09:06.675317 master-0 kubenswrapper[4430]: I1203 14:09:06.675169 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:06.675317 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:06.675317 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:06.675317 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:06.676159 master-0 kubenswrapper[4430]: I1203 14:09:06.675322 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:07.674553 master-0 kubenswrapper[4430]: I1203 14:09:07.674473 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:07.674553 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:07.674553 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:07.674553 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:07.674922 master-0 kubenswrapper[4430]: I1203 14:09:07.674586 4430
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:07.955089 master-0 kubenswrapper[4430]: I1203 14:09:07.954948 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503"}
Dec 03 14:09:07.958140 master-0 kubenswrapper[4430]: I1203 14:09:07.958094 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"343824e3f1b00694b3e683568986ea8999f4a2938d0db2b6054009935de2fe35"}
Dec 03 14:09:07.968982 master-0 kubenswrapper[4430]: I1203 14:09:07.968493 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e"}
Dec 03 14:09:08.514431 master-0 kubenswrapper[4430]: I1203 14:09:08.514352 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:09:08.514689 master-0 kubenswrapper[4430]: I1203 14:09:08.514458 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:09:08.524219 master-0 kubenswrapper[4430]: I1203 14:09:08.524162 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:09:08.673448 master-0 kubenswrapper[4430]: I1203 14:09:08.673384 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:08.673448 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:08.673448 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:08.673448 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:08.673748 master-0 kubenswrapper[4430]: I1203 14:09:08.673464 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:08.980188 master-0 kubenswrapper[4430]: I1203 14:09:08.980120 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:09:09.676288 master-0 kubenswrapper[4430]: I1203 14:09:09.674218 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:09.676288 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:09.676288 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:09.676288 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:09.676288 master-0 kubenswrapper[4430]: I1203 14:09:09.674292 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:10.674068 master-0 kubenswrapper[4430]: I1203 14:09:10.673975 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:10.674068 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:10.674068 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:10.674068 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:10.674068 master-0 kubenswrapper[4430]: I1203 14:09:10.674050 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:11.674011 master-0 kubenswrapper[4430]: I1203 14:09:11.673946 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:11.674011 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:11.674011 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:11.674011 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:11.674761 master-0 kubenswrapper[4430]: I1203 14:09:11.674047 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:12.674454 master-0 kubenswrapper[4430]: I1203 14:09:12.674326 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:12.674454 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:12.674454 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:12.674454 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:12.675333 master-0 kubenswrapper[4430]: I1203 14:09:12.675299 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:12.783707 master-0 kubenswrapper[4430]: I1203 14:09:12.783652 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:09:12.784073 master-0 kubenswrapper[4430]: I1203 14:09:12.784057 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:09:13.363670 master-0 kubenswrapper[4430]: I1203 14:09:13.363589 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:09:13.370733 master-0 kubenswrapper[4430]: I1203 14:09:13.370649 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:09:13.370733 master-0 kubenswrapper[4430]: I1203 14:09:13.370719 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:09:13.424274 master-0 kubenswrapper[4430]: I1203 14:09:13.424203 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:09:13.472391 master-0 kubenswrapper[4430]: I1203 14:09:13.472303 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:09:13.472391 master-0 kubenswrapper[4430]: I1203 14:09:13.472364 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:09:13.530587 master-0 kubenswrapper[4430]: I1203 14:09:13.530509 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:09:13.554156 master-0 kubenswrapper[4430]: I1203 14:09:13.554074 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:09:13.673177 master-0 kubenswrapper[4430]: I1203 14:09:13.672991 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:09:13.673177 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld
Dec 03 14:09:13.673177 master-0 kubenswrapper[4430]: [+]process-running ok
Dec 03 14:09:13.673177 master-0 kubenswrapper[4430]: healthz check failed
Dec 03 14:09:13.673177 master-0 kubenswrapper[4430]: I1203 14:09:13.673069 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:09:14.078625 master-0 kubenswrapper[4430]: I1203 14:09:14.078543 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:09:14.080004 master-0 kubenswrapper[4430]: I1203 14:09:14.079960 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:09:14.673272 master-0 kubenswrapper[4430]: I1203 14:09:14.673201 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:14.673272 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:14.673272 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:14.673272 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:14.673710 master-0 kubenswrapper[4430]: I1203 14:09:14.673283 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:15.101851 master-0 kubenswrapper[4430]: I1203 14:09:15.101774 4430 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:09:15.102538 master-0 kubenswrapper[4430]: I1203 14:09:15.101862 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:09:15.118514 master-0 kubenswrapper[4430]: I1203 14:09:15.118397 4430 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:09:15.118685 master-0 kubenswrapper[4430]: I1203 14:09:15.118535 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" 
containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:09:15.118837 master-0 kubenswrapper[4430]: I1203 14:09:15.118796 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:09:15.119164 master-0 kubenswrapper[4430]: I1203 14:09:15.119109 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:09:15.157146 master-0 kubenswrapper[4430]: I1203 14:09:15.157081 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:09:15.674630 master-0 kubenswrapper[4430]: I1203 14:09:15.674514 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:15.674630 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:15.674630 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:15.674630 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:15.675149 master-0 kubenswrapper[4430]: I1203 14:09:15.674648 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:16.093012 master-0 kubenswrapper[4430]: I1203 14:09:16.092909 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:09:16.674448 master-0 kubenswrapper[4430]: I1203 14:09:16.674379 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:16.674448 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:16.674448 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:16.674448 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:16.675068 master-0 kubenswrapper[4430]: I1203 14:09:16.674483 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:17.674513 master-0 kubenswrapper[4430]: I1203 14:09:17.674409 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:17.674513 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:17.674513 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:17.674513 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:17.675209 master-0 kubenswrapper[4430]: I1203 14:09:17.674541 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:18.674728 master-0 kubenswrapper[4430]: I1203 14:09:18.674592 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:18.674728 master-0 
kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:18.674728 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:18.674728 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:18.674728 master-0 kubenswrapper[4430]: I1203 14:09:18.674687 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:19.576398 master-0 kubenswrapper[4430]: E1203 14:09:19.576316 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df\": container with ID starting with 813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df not found: ID does not exist" containerID="813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df" Dec 03 14:09:19.576619 master-0 kubenswrapper[4430]: I1203 14:09:19.576410 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df" err="rpc error: code = NotFound desc = could not find container \"813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df\": container with ID starting with 813dd332500baa3dae137b264e731ab5d5fefc6606d6f8e74cb17b1560c794df not found: ID does not exist" Dec 03 14:09:19.578355 master-0 kubenswrapper[4430]: E1203 14:09:19.578306 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da\": container with ID starting with a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da not found: ID does not exist" containerID="a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da" Dec 03 14:09:19.578458 
master-0 kubenswrapper[4430]: I1203 14:09:19.578357 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da" err="rpc error: code = NotFound desc = could not find container \"a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da\": container with ID starting with a4385a7541f9927488a891c23b4996ceca84cd73c8e1b66324c3afa4f9d782da not found: ID does not exist" Dec 03 14:09:19.578884 master-0 kubenswrapper[4430]: E1203 14:09:19.578820 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7\": container with ID starting with d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7 not found: ID does not exist" containerID="d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7" Dec 03 14:09:19.578884 master-0 kubenswrapper[4430]: I1203 14:09:19.578846 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7" err="rpc error: code = NotFound desc = could not find container \"d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7\": container with ID starting with d68dcfb959b0dddfbea8315d0bffa626c1c4fc0c9c58630e2de05efb596926c7 not found: ID does not exist" Dec 03 14:09:19.579689 master-0 kubenswrapper[4430]: E1203 14:09:19.579635 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc\": container with ID starting with 88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc not found: ID does not exist" containerID="88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc" Dec 03 14:09:19.579756 master-0 kubenswrapper[4430]: I1203 14:09:19.579693 4430 
kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc" err="rpc error: code = NotFound desc = could not find container \"88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc\": container with ID starting with 88a354f91773dbe3823dbf041333b9ec1da17d5142bf375d1bbcbb8d8a0249cc not found: ID does not exist" Dec 03 14:09:19.580162 master-0 kubenswrapper[4430]: E1203 14:09:19.580125 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33\": container with ID starting with a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33 not found: ID does not exist" containerID="a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33" Dec 03 14:09:19.580222 master-0 kubenswrapper[4430]: I1203 14:09:19.580158 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33" err="rpc error: code = NotFound desc = could not find container \"a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33\": container with ID starting with a38a864ba7f51b82dc4a496a126b4c1e7efe0a94d4f6aebb149e4ed57d8a4b33 not found: ID does not exist" Dec 03 14:09:19.580647 master-0 kubenswrapper[4430]: E1203 14:09:19.580617 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac\": container with ID starting with f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac not found: ID does not exist" containerID="f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac" Dec 03 14:09:19.580647 master-0 kubenswrapper[4430]: I1203 14:09:19.580641 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for 
containerID" containerID="f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac" err="rpc error: code = NotFound desc = could not find container \"f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac\": container with ID starting with f5af02d5f16eab1071f5ef7ff439fd0d0879735855af5016940cfe8a66bc28ac not found: ID does not exist" Dec 03 14:09:19.583831 master-0 kubenswrapper[4430]: E1203 14:09:19.583774 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f\": container with ID starting with 7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f not found: ID does not exist" containerID="7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f" Dec 03 14:09:19.583925 master-0 kubenswrapper[4430]: I1203 14:09:19.583831 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f" err="rpc error: code = NotFound desc = could not find container \"7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f\": container with ID starting with 7df8bd3ffc93e8d3da9e6a143630f0e6dffdbb70d630f04b6359cdc930fcd07f not found: ID does not exist" Dec 03 14:09:19.674252 master-0 kubenswrapper[4430]: I1203 14:09:19.674141 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:19.674252 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:19.674252 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:19.674252 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:19.675501 master-0 kubenswrapper[4430]: I1203 14:09:19.674321 4430 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:20.674410 master-0 kubenswrapper[4430]: I1203 14:09:20.674105 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:20.674410 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:20.674410 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:20.674410 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:20.675551 master-0 kubenswrapper[4430]: I1203 14:09:20.674486 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:21.674890 master-0 kubenswrapper[4430]: I1203 14:09:21.674812 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:21.674890 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:21.674890 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:21.674890 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:21.675641 master-0 kubenswrapper[4430]: I1203 14:09:21.674921 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 
14:09:22.674514 master-0 kubenswrapper[4430]: I1203 14:09:22.674389 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:22.674514 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:22.674514 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:22.674514 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:22.674514 master-0 kubenswrapper[4430]: I1203 14:09:22.674496 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:22.943843 master-0 kubenswrapper[4430]: I1203 14:09:22.943626 4430 trace.go:236] Trace[60373485]: "Calculate volume metrics of cache for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" (03-Dec-2025 14:09:19.548) (total time: 3395ms): Dec 03 14:09:22.943843 master-0 kubenswrapper[4430]: Trace[60373485]: [3.395034976s] [3.395034976s] END Dec 03 14:09:23.676063 master-0 kubenswrapper[4430]: I1203 14:09:23.675918 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:23.676063 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:23.676063 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:23.676063 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:23.677134 master-0 kubenswrapper[4430]: I1203 14:09:23.676083 4430 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:24.673975 master-0 kubenswrapper[4430]: I1203 14:09:24.673874 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:24.673975 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:24.673975 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:24.673975 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:24.674328 master-0 kubenswrapper[4430]: I1203 14:09:24.673986 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:25.101106 master-0 kubenswrapper[4430]: I1203 14:09:25.101025 4430 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:09:25.101106 master-0 kubenswrapper[4430]: I1203 14:09:25.101101 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:09:25.118761 master-0 kubenswrapper[4430]: I1203 14:09:25.118673 4430 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe 
status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:09:25.119120 master-0 kubenswrapper[4430]: I1203 14:09:25.118755 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:09:25.673937 master-0 kubenswrapper[4430]: I1203 14:09:25.673859 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:25.673937 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:25.673937 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:25.673937 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:25.673937 master-0 kubenswrapper[4430]: I1203 14:09:25.673930 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:26.678468 master-0 kubenswrapper[4430]: I1203 14:09:26.674066 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:26.678468 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:26.678468 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:26.678468 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:26.678468 master-0 
kubenswrapper[4430]: I1203 14:09:26.674171 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:26.930924 master-0 kubenswrapper[4430]: I1203 14:09:26.930839 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:09:26.931332 master-0 kubenswrapper[4430]: E1203 14:09:26.931206 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:09:26.931332 master-0 kubenswrapper[4430]: E1203 14:09:26.931266 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:09:26.931460 master-0 kubenswrapper[4430]: E1203 14:09:26.931359 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:10:30.931329127 +0000 UTC m=+131.554243213 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:09:27.137032 master-0 kubenswrapper[4430]: I1203 14:09:27.136850 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"3a2b82b4d0c7a4166aa36c185d08744881caf99e18749ad4b38cb6aeda411d00"} Dec 03 14:09:27.675698 master-0 kubenswrapper[4430]: I1203 14:09:27.675529 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:27.675698 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:27.675698 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:27.675698 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:27.675698 master-0 kubenswrapper[4430]: I1203 14:09:27.675610 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:28.147669 master-0 kubenswrapper[4430]: I1203 14:09:28.147581 4430 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="3a2b82b4d0c7a4166aa36c185d08744881caf99e18749ad4b38cb6aeda411d00" exitCode=0 Dec 03 14:09:28.147669 master-0 kubenswrapper[4430]: I1203 14:09:28.147653 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" 
event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"3a2b82b4d0c7a4166aa36c185d08744881caf99e18749ad4b38cb6aeda411d00"} Dec 03 14:09:28.674200 master-0 kubenswrapper[4430]: I1203 14:09:28.674109 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:28.674200 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:28.674200 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:28.674200 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:28.674200 master-0 kubenswrapper[4430]: I1203 14:09:28.674196 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:29.166203 master-0 kubenswrapper[4430]: I1203 14:09:29.166126 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"670178a9112ded1df5b4df71d85f8bbdc6dc3eaee7dcf3f04f4418c84722164a"} Dec 03 14:09:29.674242 master-0 kubenswrapper[4430]: I1203 14:09:29.674181 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:29.674242 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:29.674242 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:29.674242 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:29.674726 master-0 kubenswrapper[4430]: I1203 
14:09:29.674265 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:30.674029 master-0 kubenswrapper[4430]: I1203 14:09:30.673921 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:09:30.674029 master-0 kubenswrapper[4430]: [-]has-synced failed: reason withheld Dec 03 14:09:30.674029 master-0 kubenswrapper[4430]: [+]process-running ok Dec 03 14:09:30.674029 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:09:30.674029 master-0 kubenswrapper[4430]: I1203 14:09:30.674033 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:09:31.674938 master-0 kubenswrapper[4430]: I1203 14:09:31.674837 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:09:31.678871 master-0 kubenswrapper[4430]: I1203 14:09:31.678761 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:09:32.790671 master-0 kubenswrapper[4430]: I1203 14:09:32.790616 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:09:32.797201 master-0 kubenswrapper[4430]: I1203 14:09:32.797143 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:09:33.339549 
master-0 kubenswrapper[4430]: I1203 14:09:33.339464 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:09:35.103864 master-0 kubenswrapper[4430]: I1203 14:09:35.103790 4430 patch_prober.go:28] interesting pod/console-c5d7cd7f9-2hp75 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Dec 03 14:09:35.104648 master-0 kubenswrapper[4430]: I1203 14:09:35.103887 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Dec 03 14:09:35.107321 master-0 kubenswrapper[4430]: I1203 14:09:35.107281 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:09:35.118518 master-0 kubenswrapper[4430]: I1203 14:09:35.118413 4430 patch_prober.go:28] interesting pod/console-648d88c756-vswh8 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" start-of-body= Dec 03 14:09:35.118798 master-0 kubenswrapper[4430]: I1203 14:09:35.118534 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.91:8443/health\": dial tcp 10.128.0.91:8443: connect: connection refused" Dec 03 14:09:35.119035 master-0 kubenswrapper[4430]: I1203 14:09:35.118980 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:09:35.119713 master-0 kubenswrapper[4430]: I1203 14:09:35.119671 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:09:35.174168 master-0 kubenswrapper[4430]: I1203 14:09:35.174095 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:09:35.255871 master-0 kubenswrapper[4430]: I1203 14:09:35.255814 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:09:39.568223 master-0 kubenswrapper[4430]: I1203 14:09:39.568137 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-747bdb58b5-mn76f_b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab/oauth-openshift/2.log" Dec 03 14:09:45.106690 master-0 kubenswrapper[4430]: I1203 14:09:45.106613 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:09:45.111000 master-0 kubenswrapper[4430]: I1203 14:09:45.110954 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:09:45.122485 master-0 kubenswrapper[4430]: I1203 14:09:45.122397 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:09:45.126797 master-0 kubenswrapper[4430]: I1203 14:09:45.126750 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:09:48.611548 master-0 kubenswrapper[4430]: I1203 14:09:48.609554 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7479ffdf48-hpdzl_0535e784-8e28-4090-aa2e-df937910767c/authentication-operator/3.log" Dec 03 
14:09:49.600847 master-0 kubenswrapper[4430]: I1203 14:09:49.600796 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-54f97f57-rr9px_5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/router/1.log" Dec 03 14:09:51.710243 master-0 kubenswrapper[4430]: I1203 14:09:51.709082 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-57fd58bc7b-kktql_24dfafc9-86a9-450e-ac62-a871138106c0/fix-audit-permissions/1.log" Dec 03 14:09:51.732462 master-0 kubenswrapper[4430]: I1203 14:09:51.729827 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-57fd58bc7b-kktql_24dfafc9-86a9-450e-ac62-a871138106c0/oauth-apiserver/2.log" Dec 03 14:09:51.771478 master-0 kubenswrapper[4430]: I1203 14:09:51.771150 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-77df56447c-vsrxx_a8dc6511-7339-4269-9d43-14ce53bb4e7f/console-operator/2.log" Dec 03 14:09:51.800373 master-0 kubenswrapper[4430]: I1203 14:09:51.800292 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-648d88c756-vswh8_62f94ae7-6043-4761-a16b-e0f072b1364b/console/2.log" Dec 03 14:09:51.830556 master-0 kubenswrapper[4430]: I1203 14:09:51.830487 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:09:51.833575 master-0 kubenswrapper[4430]: I1203 14:09:51.833531 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5d7cd7f9-2hp75_4dd1d142-6569-438d-b0c2-582aed44812d/console/2.log" Dec 03 14:09:51.900832 master-0 kubenswrapper[4430]: I1203 14:09:51.900777 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-6f5db8559b-96ljh_6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d/download-server/2.log" Dec 03 14:09:52.915220 master-0 kubenswrapper[4430]: I1203 14:09:52.915109 4430 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-6b7bcd6566-jh9m8_98392f8e-0285-4bc3-95a9-d29033639ca3/dns-operator/2.log" Dec 03 14:09:52.927515 master-0 kubenswrapper[4430]: I1203 14:09:52.927440 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-6b7bcd6566-jh9m8_98392f8e-0285-4bc3-95a9-d29033639ca3/kube-rbac-proxy/1.log" Dec 03 14:09:53.298730 master-0 kubenswrapper[4430]: I1203 14:09:53.298656 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5m4f8_4669137a-fbc4-41e1-8eeb-5f06b9da2641/dns/1.log" Dec 03 14:09:53.503905 master-0 kubenswrapper[4430]: I1203 14:09:53.503845 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-5m4f8_4669137a-fbc4-41e1-8eeb-5f06b9da2641/kube-rbac-proxy/2.log" Dec 03 14:09:53.903736 master-0 kubenswrapper[4430]: I1203 14:09:53.903660 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-4xlhs_42c95e54-b4ba-4b19-a97c-abcec840ac5d/dns-node-resolver/3.log" Dec 03 14:09:54.300783 master-0 kubenswrapper[4430]: I1203 14:09:54.300656 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-7978bf889c-n64v4_52100521-67e9-40c9-887c-eda6560f06e0/etcd-operator/3.log" Dec 03 14:09:56.101825 master-0 kubenswrapper[4430]: I1203 14:09:56.101665 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/setup/1.log" Dec 03 14:09:56.316450 master-0 kubenswrapper[4430]: I1203 14:09:56.310760 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-ensure-env-vars/1.log" Dec 03 14:09:56.500562 master-0 kubenswrapper[4430]: I1203 14:09:56.500503 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-resources-copy/1.log" Dec 03 14:09:56.699515 master-0 kubenswrapper[4430]: I1203 14:09:56.699386 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcdctl/2.log" Dec 03 14:09:56.912648 master-0 kubenswrapper[4430]: I1203 14:09:56.912438 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd/1.log" Dec 03 14:09:57.104441 master-0 kubenswrapper[4430]: I1203 14:09:57.104327 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-metrics/1.log" Dec 03 14:09:57.303393 master-0 kubenswrapper[4430]: I1203 14:09:57.303289 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-readyz/1.log" Dec 03 14:09:57.503243 master-0 kubenswrapper[4430]: I1203 14:09:57.503181 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-rev/1.log" Dec 03 14:09:57.701132 master-0 kubenswrapper[4430]: E1203 14:09:57.700976 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e\": container with ID starting with 8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e not found: ID does not exist" containerID="8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e" Dec 03 14:09:57.900004 master-0 kubenswrapper[4430]: E1203 14:09:57.899925 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee\": container with ID starting with 
329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee not found: ID does not exist" containerID="329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee" Dec 03 14:09:58.303701 master-0 kubenswrapper[4430]: I1203 14:09:58.303598 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-b5dddf8f5-kwb74_b051ae27-7879-448d-b426-4dce76e29739/kube-controller-manager-operator/2.log" Dec 03 14:09:58.504723 master-0 kubenswrapper[4430]: I1203 14:09:58.504613 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_7bce50c457ac1f4721bc81a570dd238a/kube-controller-manager/7.log" Dec 03 14:09:58.904444 master-0 kubenswrapper[4430]: I1203 14:09:58.904338 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_7bce50c457ac1f4721bc81a570dd238a/kube-controller-manager/8.log" Dec 03 14:09:59.210374 master-0 kubenswrapper[4430]: I1203 14:09:59.210215 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_7bce50c457ac1f4721bc81a570dd238a/cluster-policy-controller/2.log" Dec 03 14:09:59.703014 master-0 kubenswrapper[4430]: I1203 14:09:59.702843 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/setup/1.log" Dec 03 14:10:00.043753 master-0 kubenswrapper[4430]: I1203 14:10:00.043644 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/5.log" Dec 03 14:10:00.500116 master-0 kubenswrapper[4430]: I1203 14:10:00.500051 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-74cddd4fb5-phk6r_8c6fa89f-268c-477b-9f04-238d2305cc89/machine-config-controller/2.log" Dec 03 14:10:00.733209 master-0 kubenswrapper[4430]: I1203 14:10:00.733129 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-74cddd4fb5-phk6r_8c6fa89f-268c-477b-9f04-238d2305cc89/kube-rbac-proxy/1.log" Dec 03 14:10:02.580597 master-0 kubenswrapper[4430]: I1203 14:10:02.580545 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2ztl9_799e819f-f4b2-4ac9-8fa4-7d4da7a79285/machine-config-daemon/3.log" Dec 03 14:10:02.587378 master-0 kubenswrapper[4430]: I1203 14:10:02.587312 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2ztl9_799e819f-f4b2-4ac9-8fa4-7d4da7a79285/kube-rbac-proxy/1.log" Dec 03 14:10:02.604017 master-0 kubenswrapper[4430]: I1203 14:10:02.603932 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-664c9d94c9-9vfr4_4df2889c-99f7-402a-9d50-18ccf427179c/machine-config-operator/2.log" Dec 03 14:10:02.611165 master-0 kubenswrapper[4430]: I1203 14:10:02.611113 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-664c9d94c9-9vfr4_4df2889c-99f7-402a-9d50-18ccf427179c/kube-rbac-proxy/1.log" Dec 03 14:10:02.702235 master-0 kubenswrapper[4430]: I1203 14:10:02.702167 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-pvrfs_eecc43f5-708f-4395-98cc-696b243d6321/machine-config-server/3.log" Dec 03 14:10:03.161986 master-0 kubenswrapper[4430]: I1203 14:10:03.161867 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/3.log" Dec 03 14:10:03.508492 master-0 kubenswrapper[4430]: I1203 14:10:03.508106 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-78d987764b-xcs5w_d3200abb-a440-44db-8897-79c809c1d838/controller-manager/2.log" Dec 03 14:10:03.905053 master-0 kubenswrapper[4430]: I1203 14:10:03.904920 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-678c7f799b-4b7nv_1ba502ba-1179-478e-b4b9-f3409320b0ad/route-controller-manager/2.log" Dec 03 14:10:16.885877 master-0 kubenswrapper[4430]: I1203 14:10:16.885639 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-c5d7cd7f9-2hp75" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" containerID="cri-o://c01edad1db506ce1a440eec485368dc53175e475c8c14d77a9938e14bf9c40c8" gracePeriod=15 Dec 03 14:10:17.673927 master-0 kubenswrapper[4430]: I1203 14:10:17.673734 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5d7cd7f9-2hp75_4dd1d142-6569-438d-b0c2-582aed44812d/console/2.log" Dec 03 14:10:17.673927 master-0 kubenswrapper[4430]: I1203 14:10:17.673798 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd1d142-6569-438d-b0c2-582aed44812d" containerID="c01edad1db506ce1a440eec485368dc53175e475c8c14d77a9938e14bf9c40c8" exitCode=2 Dec 03 14:10:17.673927 master-0 kubenswrapper[4430]: I1203 14:10:17.673855 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerDied","Data":"c01edad1db506ce1a440eec485368dc53175e475c8c14d77a9938e14bf9c40c8"} Dec 03 14:10:17.673927 master-0 kubenswrapper[4430]: 
I1203 14:10:17.673891 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5d7cd7f9-2hp75" event={"ID":"4dd1d142-6569-438d-b0c2-582aed44812d","Type":"ContainerDied","Data":"3b82db4ee3affa2b67ea7317c7e6856037ddf6fe7eac4d19fdf2d717e76f0218"} Dec 03 14:10:17.673927 master-0 kubenswrapper[4430]: I1203 14:10:17.673911 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b82db4ee3affa2b67ea7317c7e6856037ddf6fe7eac4d19fdf2d717e76f0218" Dec 03 14:10:17.680744 master-0 kubenswrapper[4430]: I1203 14:10:17.680718 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5d7cd7f9-2hp75_4dd1d142-6569-438d-b0c2-582aed44812d/console/2.log" Dec 03 14:10:17.680819 master-0 kubenswrapper[4430]: I1203 14:10:17.680797 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:10:17.765830 master-0 kubenswrapper[4430]: I1203 14:10:17.765755 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") pod \"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.766188 master-0 kubenswrapper[4430]: I1203 14:10:17.765854 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") pod \"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.766188 master-0 kubenswrapper[4430]: I1203 14:10:17.765883 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") pod 
\"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.766188 master-0 kubenswrapper[4430]: I1203 14:10:17.765982 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") pod \"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.766188 master-0 kubenswrapper[4430]: I1203 14:10:17.766078 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") pod \"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.766188 master-0 kubenswrapper[4430]: I1203 14:10:17.766136 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") pod \"4dd1d142-6569-438d-b0c2-582aed44812d\" (UID: \"4dd1d142-6569-438d-b0c2-582aed44812d\") " Dec 03 14:10:17.769016 master-0 kubenswrapper[4430]: I1203 14:10:17.768934 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config" (OuterVolumeSpecName: "console-config") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:10:17.770029 master-0 kubenswrapper[4430]: I1203 14:10:17.769977 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca" (OuterVolumeSpecName: "service-ca") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:10:17.770944 master-0 kubenswrapper[4430]: I1203 14:10:17.770891 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:10:17.772663 master-0 kubenswrapper[4430]: I1203 14:10:17.772614 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:10:17.773629 master-0 kubenswrapper[4430]: I1203 14:10:17.773582 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:10:17.774320 master-0 kubenswrapper[4430]: I1203 14:10:17.774253 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw" (OuterVolumeSpecName: "kube-api-access-gfzrw") pod "4dd1d142-6569-438d-b0c2-582aed44812d" (UID: "4dd1d142-6569-438d-b0c2-582aed44812d"). InnerVolumeSpecName "kube-api-access-gfzrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868323 4430 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868376 4430 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-console-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868387 4430 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868400 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfzrw\" (UniqueName: \"kubernetes.io/projected/4dd1d142-6569-438d-b0c2-582aed44812d-kube-api-access-gfzrw\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868410 4430 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4dd1d142-6569-438d-b0c2-582aed44812d-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:17.868403 master-0 kubenswrapper[4430]: I1203 14:10:17.868450 4430 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4dd1d142-6569-438d-b0c2-582aed44812d-service-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:10:18.683893 master-0 kubenswrapper[4430]: I1203 14:10:18.683799 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c5d7cd7f9-2hp75" Dec 03 14:10:18.730851 master-0 kubenswrapper[4430]: I1203 14:10:18.730746 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:10:18.736097 master-0 kubenswrapper[4430]: I1203 14:10:18.736037 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-c5d7cd7f9-2hp75"] Dec 03 14:10:19.593486 master-0 kubenswrapper[4430]: I1203 14:10:19.592828 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" path="/var/lib/kubelet/pods/4dd1d142-6569-438d-b0c2-582aed44812d/volumes" Dec 03 14:10:19.598633 master-0 kubenswrapper[4430]: E1203 14:10:19.598580 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1\": container with ID starting with bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1 not found: ID does not exist" containerID="bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1" Dec 03 14:10:19.598825 master-0 kubenswrapper[4430]: I1203 14:10:19.598642 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1" err="rpc error: code = NotFound desc = could not find container \"bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1\": container with ID starting with bf0c34d7dcc09bcd99773b55bb4e78896db2c6576b4f6a7f618584facf6c86c1 not found: ID does not exist" Dec 03 14:10:30.952920 master-0 kubenswrapper[4430]: I1203 14:10:30.952837 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: 
\"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:10:30.953752 master-0 kubenswrapper[4430]: E1203 14:10:30.953263 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:10:30.953752 master-0 kubenswrapper[4430]: E1203 14:10:30.953372 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:10:30.953752 master-0 kubenswrapper[4430]: E1203 14:10:30.953554 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:12:32.953502629 +0000 UTC m=+253.576416745 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:10:31.621787 master-0 kubenswrapper[4430]: I1203 14:10:31.621689 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:10:31.622126 master-0 kubenswrapper[4430]: I1203 14:10:31.621836 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:10:42.699866 master-0 kubenswrapper[4430]: I1203 14:10:42.698934 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"] Dec 03 14:10:42.699866 master-0 kubenswrapper[4430]: E1203 14:10:42.699541 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" Dec 03 14:10:42.699866 master-0 kubenswrapper[4430]: I1203 14:10:42.699579 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" Dec 03 14:10:42.699866 master-0 kubenswrapper[4430]: E1203 14:10:42.699637 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98a8d85d3901d33f6fe192bdc7172aa" containerName="startup-monitor" Dec 03 14:10:42.699866 master-0 kubenswrapper[4430]: I1203 14:10:42.699646 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98a8d85d3901d33f6fe192bdc7172aa" containerName="startup-monitor" Dec 03 14:10:42.700834 master-0 kubenswrapper[4430]: I1203 14:10:42.699887 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd1d142-6569-438d-b0c2-582aed44812d" containerName="console" Dec 03 14:10:42.700834 master-0 kubenswrapper[4430]: I1203 14:10:42.699917 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98a8d85d3901d33f6fe192bdc7172aa" containerName="startup-monitor" Dec 03 14:10:42.703330 master-0 kubenswrapper[4430]: I1203 14:10:42.703298 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.712868 master-0 kubenswrapper[4430]: I1203 14:10:42.712792 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2ts27"]
Dec 03 14:10:42.715012 master-0 kubenswrapper[4430]: I1203 14:10:42.714969 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.724769 master-0 kubenswrapper[4430]: I1203 14:10:42.721433 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qb87p"]
Dec 03 14:10:42.724769 master-0 kubenswrapper[4430]: I1203 14:10:42.724199 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.724769 master-0 kubenswrapper[4430]: I1203 14:10:42.724390 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"]
Dec 03 14:10:42.726324 master-0 kubenswrapper[4430]: I1203 14:10:42.726283 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.731445 master-0 kubenswrapper[4430]: I1203 14:10:42.730164 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"]
Dec 03 14:10:42.740437 master-0 kubenswrapper[4430]: I1203 14:10:42.740366 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ts27"]
Dec 03 14:10:42.745122 master-0 kubenswrapper[4430]: I1203 14:10:42.745076 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb87p"]
Dec 03 14:10:42.749894 master-0 kubenswrapper[4430]: I1203 14:10:42.749846 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"]
Dec 03 14:10:42.873534 master-0 kubenswrapper[4430]: I1203 14:10:42.873451 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.873534 master-0 kubenswrapper[4430]: I1203 14:10:42.873521 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlrqd\" (UniqueName: \"kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.873534 master-0 kubenswrapper[4430]: I1203 14:10:42.873544 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.873534 master-0 kubenswrapper[4430]: I1203 14:10:42.873563 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.873920 master-0 kubenswrapper[4430]: I1203 14:10:42.873580 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9wnq\" (UniqueName: \"kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.873920 master-0 kubenswrapper[4430]: I1203 14:10:42.873604 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.874596 master-0 kubenswrapper[4430]: I1203 14:10:42.874497 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.874801 master-0 kubenswrapper[4430]: I1203 14:10:42.874767 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jcm\" (UniqueName: \"kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.874867 master-0 kubenswrapper[4430]: I1203 14:10:42.874842 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29cbw\" (UniqueName: \"kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.874905 master-0 kubenswrapper[4430]: I1203 14:10:42.874896 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.874959 master-0 kubenswrapper[4430]: I1203 14:10:42.874922 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.875004 master-0 kubenswrapper[4430]: I1203 14:10:42.874960 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.976487 master-0 kubenswrapper[4430]: I1203 14:10:42.976350 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7jcm\" (UniqueName: \"kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.976807 master-0 kubenswrapper[4430]: I1203 14:10:42.976526 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29cbw\" (UniqueName: \"kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.976807 master-0 kubenswrapper[4430]: I1203 14:10:42.976609 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.976807 master-0 kubenswrapper[4430]: I1203 14:10:42.976656 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.976807 master-0 kubenswrapper[4430]: I1203 14:10:42.976711 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.977018 master-0 kubenswrapper[4430]: I1203 14:10:42.976834 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.977084 master-0 kubenswrapper[4430]: I1203 14:10:42.977051 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.977130 master-0 kubenswrapper[4430]: I1203 14:10:42.977113 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlrqd\" (UniqueName: \"kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.977185 master-0 kubenswrapper[4430]: I1203 14:10:42.977156 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.977238 master-0 kubenswrapper[4430]: I1203 14:10:42.977191 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9wnq\" (UniqueName: \"kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.977282 master-0 kubenswrapper[4430]: I1203 14:10:42.977241 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.977334 master-0 kubenswrapper[4430]: I1203 14:10:42.977286 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.977536 master-0 kubenswrapper[4430]: I1203 14:10:42.977449 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.977536 master-0 kubenswrapper[4430]: I1203 14:10:42.977480 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.977651 master-0 kubenswrapper[4430]: I1203 14:10:42.977562 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.977996 master-0 kubenswrapper[4430]: I1203 14:10:42.977954 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.978076 master-0 kubenswrapper[4430]: I1203 14:10:42.977959 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.978076 master-0 kubenswrapper[4430]: I1203 14:10:42.978059 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.978214 master-0 kubenswrapper[4430]: I1203 14:10:42.978065 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:42.978314 master-0 kubenswrapper[4430]: I1203 14:10:42.978281 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.994788 master-0 kubenswrapper[4430]: I1203 14:10:42.994731 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9wnq\" (UniqueName: \"kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq\") pod \"community-operators-qb87p\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:42.995784 master-0 kubenswrapper[4430]: I1203 14:10:42.995735 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29cbw\" (UniqueName: \"kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw\") pod \"redhat-marketplace-vjtzs\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:42.995904 master-0 kubenswrapper[4430]: I1203 14:10:42.995864 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jcm\" (UniqueName: \"kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm\") pod \"redhat-operators-5zhkp\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:42.996661 master-0 kubenswrapper[4430]: I1203 14:10:42.996599 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlrqd\" (UniqueName: \"kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd\") pod \"certified-operators-2ts27\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") " pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:43.041556 master-0 kubenswrapper[4430]: I1203 14:10:43.041481 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:43.064580 master-0 kubenswrapper[4430]: I1203 14:10:43.064480 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:43.090765 master-0 kubenswrapper[4430]: I1203 14:10:43.090663 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:43.102524 master-0 kubenswrapper[4430]: I1203 14:10:43.102452 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:43.482765 master-0 kubenswrapper[4430]: I1203 14:10:43.482711 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"]
Dec 03 14:10:43.485253 master-0 kubenswrapper[4430]: W1203 14:10:43.485226 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92d55c3f_3624_43fe_b81f_28cdb0f2d449.slice/crio-b6dbf6d433bbd457f67b9124199e866a1e10dfedf2d9b725abcc774dcd2193ff WatchSource:0}: Error finding container b6dbf6d433bbd457f67b9124199e866a1e10dfedf2d9b725abcc774dcd2193ff: Status 404 returned error can't find the container with id b6dbf6d433bbd457f67b9124199e866a1e10dfedf2d9b725abcc774dcd2193ff
Dec 03 14:10:43.546569 master-0 kubenswrapper[4430]: I1203 14:10:43.546518 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qb87p"]
Dec 03 14:10:43.550159 master-0 kubenswrapper[4430]: I1203 14:10:43.550127 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2ts27"]
Dec 03 14:10:43.553277 master-0 kubenswrapper[4430]: I1203 14:10:43.553222 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"]
Dec 03 14:10:43.556668 master-0 kubenswrapper[4430]: W1203 14:10:43.556623 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cd2fcaa_4163_46ea_9af8_631006e2b7ca.slice/crio-adaa3435e0fed7f6c724c832ce41ac468b5f8507ee5ba9b5ed4aace7853ab831 WatchSource:0}: Error finding container adaa3435e0fed7f6c724c832ce41ac468b5f8507ee5ba9b5ed4aace7853ab831: Status 404 returned error can't find the container with id adaa3435e0fed7f6c724c832ce41ac468b5f8507ee5ba9b5ed4aace7853ab831
Dec 03 14:10:43.558291 master-0 kubenswrapper[4430]: W1203 14:10:43.558246 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf3821f0_0379_4aba_af13_368b76a2d016.slice/crio-6462d24e8b63365d23fe93b71e9023c7c68008d7cb26514661eba1c478c259f9 WatchSource:0}: Error finding container 6462d24e8b63365d23fe93b71e9023c7c68008d7cb26514661eba1c478c259f9: Status 404 returned error can't find the container with id 6462d24e8b63365d23fe93b71e9023c7c68008d7cb26514661eba1c478c259f9
Dec 03 14:10:43.558819 master-0 kubenswrapper[4430]: W1203 14:10:43.558791 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d3676e4_94ef_4595_b106_1a354cceccea.slice/crio-0bd91054760a895b4bc4a66e37b739d3701233a8af2e1be2ebd9f4273ca7c391 WatchSource:0}: Error finding container 0bd91054760a895b4bc4a66e37b739d3701233a8af2e1be2ebd9f4273ca7c391: Status 404 returned error can't find the container with id 0bd91054760a895b4bc4a66e37b739d3701233a8af2e1be2ebd9f4273ca7c391
Dec 03 14:10:43.949964 master-0 kubenswrapper[4430]: I1203 14:10:43.949887 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf3821f0-0379-4aba-af13-368b76a2d016" containerID="047c245ba1c2c62c5892ae6e9c30ff6cd2534604a523d21c5897fa5f879462c4" exitCode=0
Dec 03 14:10:43.950667 master-0 kubenswrapper[4430]: I1203 14:10:43.949977 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerDied","Data":"047c245ba1c2c62c5892ae6e9c30ff6cd2534604a523d21c5897fa5f879462c4"}
Dec 03 14:10:43.950667 master-0 kubenswrapper[4430]: I1203 14:10:43.950011 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerStarted","Data":"6462d24e8b63365d23fe93b71e9023c7c68008d7cb26514661eba1c478c259f9"}
Dec 03 14:10:43.955590 master-0 kubenswrapper[4430]: I1203 14:10:43.955533 4430 generic.go:334] "Generic (PLEG): container finished" podID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerID="38b82b078f439f37c3bcd8cdc163286d3319f68066493aa187763d027e684fab" exitCode=0
Dec 03 14:10:43.955758 master-0 kubenswrapper[4430]: I1203 14:10:43.955619 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerDied","Data":"38b82b078f439f37c3bcd8cdc163286d3319f68066493aa187763d027e684fab"}
Dec 03 14:10:43.955758 master-0 kubenswrapper[4430]: I1203 14:10:43.955654 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerStarted","Data":"adaa3435e0fed7f6c724c832ce41ac468b5f8507ee5ba9b5ed4aace7853ab831"}
Dec 03 14:10:43.958377 master-0 kubenswrapper[4430]: I1203 14:10:43.958341 4430 generic.go:334] "Generic (PLEG): container finished" podID="7d3676e4-94ef-4595-b106-1a354cceccea" containerID="b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049" exitCode=0
Dec 03 14:10:43.958626 master-0 kubenswrapper[4430]: I1203 14:10:43.958398 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerDied","Data":"b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049"}
Dec 03 14:10:43.958626 master-0 kubenswrapper[4430]: I1203 14:10:43.958442 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerStarted","Data":"0bd91054760a895b4bc4a66e37b739d3701233a8af2e1be2ebd9f4273ca7c391"}
Dec 03 14:10:43.961105 master-0 kubenswrapper[4430]: I1203 14:10:43.960454 4430 generic.go:334] "Generic (PLEG): container finished" podID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerID="7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9" exitCode=0
Dec 03 14:10:43.961105 master-0 kubenswrapper[4430]: I1203 14:10:43.960496 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerDied","Data":"7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9"}
Dec 03 14:10:43.961105 master-0 kubenswrapper[4430]: I1203 14:10:43.960524 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerStarted","Data":"b6dbf6d433bbd457f67b9124199e866a1e10dfedf2d9b725abcc774dcd2193ff"}
Dec 03 14:10:45.978947 master-0 kubenswrapper[4430]: I1203 14:10:45.978855 4430 generic.go:334] "Generic (PLEG): container finished" podID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerID="8713241f3cfe76a14fd1b5591f20a1f0a4f7f8b617ed6bfd733d8d7e5b554d05" exitCode=0
Dec 03 14:10:45.978947 master-0 kubenswrapper[4430]: I1203 14:10:45.978923 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerDied","Data":"8713241f3cfe76a14fd1b5591f20a1f0a4f7f8b617ed6bfd733d8d7e5b554d05"}
Dec 03 14:10:45.981138 master-0 kubenswrapper[4430]: I1203 14:10:45.981098 4430 generic.go:334] "Generic (PLEG): container finished" podID="7d3676e4-94ef-4595-b106-1a354cceccea" containerID="d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3" exitCode=0
Dec 03 14:10:45.981389 master-0 kubenswrapper[4430]: I1203 14:10:45.981155 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerDied","Data":"d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3"}
Dec 03 14:10:45.984605 master-0 kubenswrapper[4430]: I1203 14:10:45.984533 4430 generic.go:334] "Generic (PLEG): container finished" podID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerID="90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3" exitCode=0
Dec 03 14:10:45.984720 master-0 kubenswrapper[4430]: I1203 14:10:45.984688 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerDied","Data":"90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3"}
Dec 03 14:10:45.987448 master-0 kubenswrapper[4430]: I1203 14:10:45.987370 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf3821f0-0379-4aba-af13-368b76a2d016" containerID="ad1631adcf8f823c1ab7e5d7a17f72f93919bc9a378782381890948bf47fc7b2" exitCode=0
Dec 03 14:10:45.987572 master-0 kubenswrapper[4430]: I1203 14:10:45.987469 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerDied","Data":"ad1631adcf8f823c1ab7e5d7a17f72f93919bc9a378782381890948bf47fc7b2"}
Dec 03 14:10:46.997041 master-0 kubenswrapper[4430]: I1203 14:10:46.996980 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerStarted","Data":"7a7e77364862b5ad56fe58593d5a926f48641fea8ff0e3c28f2cf7fd47cbb763"}
Dec 03 14:10:46.999269 master-0 kubenswrapper[4430]: I1203 14:10:46.999232 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerStarted","Data":"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305"}
Dec 03 14:10:47.001698 master-0 kubenswrapper[4430]: I1203 14:10:47.001644 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerStarted","Data":"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef"}
Dec 03 14:10:47.004045 master-0 kubenswrapper[4430]: I1203 14:10:47.003997 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerStarted","Data":"5e6960dfd595da93060e8eeb09440083b5535bc4215160643d69a14139540910"}
Dec 03 14:10:47.026787 master-0 kubenswrapper[4430]: I1203 14:10:47.026690 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2ts27" podStartSLOduration=103.532193793 podStartE2EDuration="1m46.026671644s" podCreationTimestamp="2025-12-03 14:09:01 +0000 UTC" firstStartedPulling="2025-12-03 14:10:43.957041882 +0000 UTC m=+144.579955958" lastFinishedPulling="2025-12-03 14:10:46.451519733 +0000 UTC m=+147.074433809" observedRunningTime="2025-12-03 14:10:47.022348007 +0000 UTC m=+147.645262083" watchObservedRunningTime="2025-12-03 14:10:47.026671644 +0000 UTC m=+147.649585720"
Dec 03 14:10:47.043375 master-0 kubenswrapper[4430]: I1203 14:10:47.043305 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zhkp" podStartSLOduration=101.608430736 podStartE2EDuration="1m44.04328532s" podCreationTimestamp="2025-12-03 14:09:03 +0000 UTC" firstStartedPulling="2025-12-03 14:10:43.96003933 +0000 UTC m=+144.582953406" lastFinishedPulling="2025-12-03 14:10:46.394893914 +0000 UTC m=+147.017807990" observedRunningTime="2025-12-03 14:10:47.041886609 +0000 UTC m=+147.664800695" watchObservedRunningTime="2025-12-03 14:10:47.04328532 +0000 UTC m=+147.666199396"
Dec 03 14:10:47.071389 master-0 kubenswrapper[4430]: I1203 14:10:47.071291 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qb87p" podStartSLOduration=103.616007809 podStartE2EDuration="1m46.07126807s" podCreationTimestamp="2025-12-03 14:09:01 +0000 UTC" firstStartedPulling="2025-12-03 14:10:43.9518325 +0000 UTC m=+144.574746576" lastFinishedPulling="2025-12-03 14:10:46.407092761 +0000 UTC m=+147.030006837" observedRunningTime="2025-12-03 14:10:47.066038527 +0000 UTC m=+147.688952623" watchObservedRunningTime="2025-12-03 14:10:47.07126807 +0000 UTC m=+147.694182146"
Dec 03 14:10:47.086886 master-0 kubenswrapper[4430]: I1203 14:10:47.086801 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vjtzs" podStartSLOduration=101.440749204 podStartE2EDuration="1m44.086777365s" podCreationTimestamp="2025-12-03 14:09:03 +0000 UTC" firstStartedPulling="2025-12-03 14:10:43.962057279 +0000 UTC m=+144.584971355" lastFinishedPulling="2025-12-03 14:10:46.60808544 +0000 UTC m=+147.230999516" observedRunningTime="2025-12-03 14:10:47.085674082 +0000 UTC m=+147.708588178" watchObservedRunningTime="2025-12-03 14:10:47.086777365 +0000 UTC m=+147.709691441"
Dec 03 14:10:53.042090 master-0 kubenswrapper[4430]: I1203 14:10:53.042008 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:53.042090 master-0 kubenswrapper[4430]: I1203 14:10:53.042078 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:53.064955 master-0 kubenswrapper[4430]: I1203 14:10:53.064876 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:53.065177 master-0 kubenswrapper[4430]: I1203 14:10:53.064992 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:53.092468 master-0 kubenswrapper[4430]: I1203 14:10:53.092369 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:53.092468 master-0 kubenswrapper[4430]: I1203 14:10:53.092454 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:53.093810 master-0 kubenswrapper[4430]: I1203 14:10:53.093745 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:53.103011 master-0 kubenswrapper[4430]: I1203 14:10:53.102913 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:53.103212 master-0 kubenswrapper[4430]: I1203 14:10:53.103033 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:53.112318 master-0 kubenswrapper[4430]: I1203 14:10:53.112253 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:53.137691 master-0 kubenswrapper[4430]: I1203 14:10:53.137638 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:53.148577 master-0 kubenswrapper[4430]: I1203 14:10:53.148132 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:54.094384 master-0 kubenswrapper[4430]: I1203 14:10:54.094318 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vjtzs"
Dec 03 14:10:54.096892 master-0 kubenswrapper[4430]: I1203 14:10:54.096842 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zhkp"
Dec 03 14:10:54.098402 master-0 kubenswrapper[4430]: I1203 14:10:54.098363 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qb87p"
Dec 03 14:10:54.098897 master-0 kubenswrapper[4430]: I1203 14:10:54.098867 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:55.498716 master-0 kubenswrapper[4430]: I1203 14:10:55.498538 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ts27"]
Dec 03 14:10:56.070356 master-0 kubenswrapper[4430]: I1203 14:10:56.070305 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2ts27" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="registry-server" containerID="cri-o://7a7e77364862b5ad56fe58593d5a926f48641fea8ff0e3c28f2cf7fd47cbb763" gracePeriod=2
Dec 03 14:10:58.054194 master-0 kubenswrapper[4430]: I1203 14:10:58.050838 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb87p"]
Dec 03 14:10:58.055049 master-0 kubenswrapper[4430]: I1203 14:10:58.054965 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qb87p" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="registry-server" containerID="cri-o://5e6960dfd595da93060e8eeb09440083b5535bc4215160643d69a14139540910" gracePeriod=2
Dec 03 14:10:58.086784 master-0 kubenswrapper[4430]: I1203 14:10:58.086719 4430 generic.go:334] "Generic (PLEG): container finished" podID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerID="7a7e77364862b5ad56fe58593d5a926f48641fea8ff0e3c28f2cf7fd47cbb763" exitCode=0
Dec 03 14:10:58.086784 master-0 kubenswrapper[4430]: I1203 14:10:58.086772 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerDied","Data":"7a7e77364862b5ad56fe58593d5a926f48641fea8ff0e3c28f2cf7fd47cbb763"}
Dec 03 14:10:59.432726 master-0 kubenswrapper[4430]: I1203 14:10:59.432636 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"]
Dec 03 14:10:59.433557 master-0 kubenswrapper[4430]: I1203 14:10:59.432995 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zhkp" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="registry-server" containerID="cri-o://b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef" gracePeriod=2
Dec 03 14:10:59.439650 master-0 kubenswrapper[4430]: I1203 14:10:59.439549 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"]
Dec 03 14:10:59.439996 master-0 kubenswrapper[4430]: I1203 14:10:59.439933 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vjtzs" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="registry-server" containerID="cri-o://a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305" gracePeriod=2
Dec 03 14:10:59.449446 master-0 kubenswrapper[4430]: I1203 14:10:59.449335 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ts27"
Dec 03 14:10:59.564182 master-0 kubenswrapper[4430]: I1203 14:10:59.564044 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content\") pod \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") "
Dec 03 14:10:59.564344 master-0 kubenswrapper[4430]: I1203 14:10:59.564231 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlrqd\" (UniqueName: \"kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd\") pod \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") "
Dec 03 14:10:59.564344 master-0 kubenswrapper[4430]: I1203 14:10:59.564258 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities\") pod \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\" (UID: \"5cd2fcaa-4163-46ea-9af8-631006e2b7ca\") "
Dec 03 14:10:59.565176 master-0 kubenswrapper[4430]: I1203 14:10:59.565135 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities" (OuterVolumeSpecName: "utilities") pod "5cd2fcaa-4163-46ea-9af8-631006e2b7ca" (UID: "5cd2fcaa-4163-46ea-9af8-631006e2b7ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:10:59.568091 master-0 kubenswrapper[4430]: I1203 14:10:59.568023 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd" (OuterVolumeSpecName: "kube-api-access-mlrqd") pod "5cd2fcaa-4163-46ea-9af8-631006e2b7ca" (UID: "5cd2fcaa-4163-46ea-9af8-631006e2b7ca"). InnerVolumeSpecName "kube-api-access-mlrqd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:10:59.617646 master-0 kubenswrapper[4430]: I1203 14:10:59.617014 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cd2fcaa-4163-46ea-9af8-631006e2b7ca" (UID: "5cd2fcaa-4163-46ea-9af8-631006e2b7ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:10:59.666522 master-0 kubenswrapper[4430]: I1203 14:10:59.666391 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlrqd\" (UniqueName: \"kubernetes.io/projected/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-kube-api-access-mlrqd\") on node \"master-0\" DevicePath \"\""
Dec 03 14:10:59.666522 master-0 kubenswrapper[4430]: I1203 14:10:59.666447 4430 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:10:59.666522 master-0 kubenswrapper[4430]: I1203 14:10:59.666457 4430 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cd2fcaa-4163-46ea-9af8-631006e2b7ca-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:10:59.842635 master-0 kubenswrapper[4430]: I1203 14:10:59.842569 4430 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjtzs" Dec 03 14:10:59.962271 master-0 kubenswrapper[4430]: I1203 14:10:59.962207 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhkp" Dec 03 14:10:59.973580 master-0 kubenswrapper[4430]: I1203 14:10:59.973531 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities\") pod \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " Dec 03 14:10:59.973821 master-0 kubenswrapper[4430]: I1203 14:10:59.973619 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content\") pod \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " Dec 03 14:10:59.973821 master-0 kubenswrapper[4430]: I1203 14:10:59.973668 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29cbw\" (UniqueName: \"kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw\") pod \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\" (UID: \"92d55c3f-3624-43fe-b81f-28cdb0f2d449\") " Dec 03 14:10:59.974723 master-0 kubenswrapper[4430]: I1203 14:10:59.974680 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities" (OuterVolumeSpecName: "utilities") pod "92d55c3f-3624-43fe-b81f-28cdb0f2d449" (UID: "92d55c3f-3624-43fe-b81f-28cdb0f2d449"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:10:59.976794 master-0 kubenswrapper[4430]: I1203 14:10:59.976757 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw" (OuterVolumeSpecName: "kube-api-access-29cbw") pod "92d55c3f-3624-43fe-b81f-28cdb0f2d449" (UID: "92d55c3f-3624-43fe-b81f-28cdb0f2d449"). InnerVolumeSpecName "kube-api-access-29cbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:11:00.008855 master-0 kubenswrapper[4430]: I1203 14:11:00.008801 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92d55c3f-3624-43fe-b81f-28cdb0f2d449" (UID: "92d55c3f-3624-43fe-b81f-28cdb0f2d449"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:11:00.075636 master-0 kubenswrapper[4430]: I1203 14:11:00.075594 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities\") pod \"7d3676e4-94ef-4595-b106-1a354cceccea\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " Dec 03 14:11:00.075907 master-0 kubenswrapper[4430]: I1203 14:11:00.075879 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7jcm\" (UniqueName: \"kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm\") pod \"7d3676e4-94ef-4595-b106-1a354cceccea\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " Dec 03 14:11:00.075964 master-0 kubenswrapper[4430]: I1203 14:11:00.075938 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content\") 
pod \"7d3676e4-94ef-4595-b106-1a354cceccea\" (UID: \"7d3676e4-94ef-4595-b106-1a354cceccea\") " Dec 03 14:11:00.076351 master-0 kubenswrapper[4430]: I1203 14:11:00.076321 4430 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.076351 master-0 kubenswrapper[4430]: I1203 14:11:00.076344 4430 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92d55c3f-3624-43fe-b81f-28cdb0f2d449-catalog-content\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.076497 master-0 kubenswrapper[4430]: I1203 14:11:00.076359 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29cbw\" (UniqueName: \"kubernetes.io/projected/92d55c3f-3624-43fe-b81f-28cdb0f2d449-kube-api-access-29cbw\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.078021 master-0 kubenswrapper[4430]: I1203 14:11:00.077984 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities" (OuterVolumeSpecName: "utilities") pod "7d3676e4-94ef-4595-b106-1a354cceccea" (UID: "7d3676e4-94ef-4595-b106-1a354cceccea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:11:00.081713 master-0 kubenswrapper[4430]: I1203 14:11:00.081668 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm" (OuterVolumeSpecName: "kube-api-access-q7jcm") pod "7d3676e4-94ef-4595-b106-1a354cceccea" (UID: "7d3676e4-94ef-4595-b106-1a354cceccea"). InnerVolumeSpecName "kube-api-access-q7jcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:11:00.102784 master-0 kubenswrapper[4430]: I1203 14:11:00.102696 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2ts27" event={"ID":"5cd2fcaa-4163-46ea-9af8-631006e2b7ca","Type":"ContainerDied","Data":"adaa3435e0fed7f6c724c832ce41ac468b5f8507ee5ba9b5ed4aace7853ab831"} Dec 03 14:11:00.103073 master-0 kubenswrapper[4430]: I1203 14:11:00.102815 4430 scope.go:117] "RemoveContainer" containerID="7a7e77364862b5ad56fe58593d5a926f48641fea8ff0e3c28f2cf7fd47cbb763" Dec 03 14:11:00.103073 master-0 kubenswrapper[4430]: I1203 14:11:00.103032 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2ts27" Dec 03 14:11:00.108046 master-0 kubenswrapper[4430]: I1203 14:11:00.107983 4430 generic.go:334] "Generic (PLEG): container finished" podID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerID="a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305" exitCode=0 Dec 03 14:11:00.108238 master-0 kubenswrapper[4430]: I1203 14:11:00.108058 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vjtzs" Dec 03 14:11:00.108471 master-0 kubenswrapper[4430]: I1203 14:11:00.108388 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerDied","Data":"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305"} Dec 03 14:11:00.108698 master-0 kubenswrapper[4430]: I1203 14:11:00.108660 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vjtzs" event={"ID":"92d55c3f-3624-43fe-b81f-28cdb0f2d449","Type":"ContainerDied","Data":"b6dbf6d433bbd457f67b9124199e866a1e10dfedf2d9b725abcc774dcd2193ff"} Dec 03 14:11:00.111699 master-0 kubenswrapper[4430]: I1203 14:11:00.111663 4430 generic.go:334] "Generic (PLEG): container finished" podID="7d3676e4-94ef-4595-b106-1a354cceccea" containerID="b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef" exitCode=0 Dec 03 14:11:00.111925 master-0 kubenswrapper[4430]: I1203 14:11:00.111781 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhkp" Dec 03 14:11:00.112206 master-0 kubenswrapper[4430]: I1203 14:11:00.111769 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerDied","Data":"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef"} Dec 03 14:11:00.112304 master-0 kubenswrapper[4430]: I1203 14:11:00.112248 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhkp" event={"ID":"7d3676e4-94ef-4595-b106-1a354cceccea","Type":"ContainerDied","Data":"0bd91054760a895b4bc4a66e37b739d3701233a8af2e1be2ebd9f4273ca7c391"} Dec 03 14:11:00.115093 master-0 kubenswrapper[4430]: I1203 14:11:00.115040 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf3821f0-0379-4aba-af13-368b76a2d016" containerID="5e6960dfd595da93060e8eeb09440083b5535bc4215160643d69a14139540910" exitCode=0 Dec 03 14:11:00.115093 master-0 kubenswrapper[4430]: I1203 14:11:00.115078 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerDied","Data":"5e6960dfd595da93060e8eeb09440083b5535bc4215160643d69a14139540910"} Dec 03 14:11:00.128707 master-0 kubenswrapper[4430]: I1203 14:11:00.126757 4430 scope.go:117] "RemoveContainer" containerID="8713241f3cfe76a14fd1b5591f20a1f0a4f7f8b617ed6bfd733d8d7e5b554d05" Dec 03 14:11:00.153920 master-0 kubenswrapper[4430]: I1203 14:11:00.153855 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2ts27"] Dec 03 14:11:00.163965 master-0 kubenswrapper[4430]: I1203 14:11:00.163901 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2ts27"] Dec 03 14:11:00.178872 master-0 kubenswrapper[4430]: I1203 14:11:00.178761 4430 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.178872 master-0 kubenswrapper[4430]: I1203 14:11:00.178808 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7jcm\" (UniqueName: \"kubernetes.io/projected/7d3676e4-94ef-4595-b106-1a354cceccea-kube-api-access-q7jcm\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.181772 master-0 kubenswrapper[4430]: I1203 14:11:00.181620 4430 scope.go:117] "RemoveContainer" containerID="38b82b078f439f37c3bcd8cdc163286d3319f68066493aa187763d027e684fab" Dec 03 14:11:00.191459 master-0 kubenswrapper[4430]: I1203 14:11:00.191394 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"] Dec 03 14:11:00.199912 master-0 kubenswrapper[4430]: I1203 14:11:00.199809 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d3676e4-94ef-4595-b106-1a354cceccea" (UID: "7d3676e4-94ef-4595-b106-1a354cceccea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:11:00.200061 master-0 kubenswrapper[4430]: I1203 14:11:00.199958 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vjtzs"] Dec 03 14:11:00.205567 master-0 kubenswrapper[4430]: I1203 14:11:00.204737 4430 scope.go:117] "RemoveContainer" containerID="a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305" Dec 03 14:11:00.223785 master-0 kubenswrapper[4430]: I1203 14:11:00.223736 4430 scope.go:117] "RemoveContainer" containerID="90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3" Dec 03 14:11:00.239002 master-0 kubenswrapper[4430]: I1203 14:11:00.238966 4430 scope.go:117] "RemoveContainer" containerID="7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9" Dec 03 14:11:00.269748 master-0 kubenswrapper[4430]: I1203 14:11:00.269689 4430 scope.go:117] "RemoveContainer" containerID="a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305" Dec 03 14:11:00.270560 master-0 kubenswrapper[4430]: E1203 14:11:00.270512 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305\": container with ID starting with a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305 not found: ID does not exist" containerID="a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305" Dec 03 14:11:00.270635 master-0 kubenswrapper[4430]: I1203 14:11:00.270566 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305"} err="failed to get container status \"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305\": rpc error: code = NotFound desc = could not find container \"a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305\": container with ID starting with 
a246a7d7e925c3d798214885efa20f54b8cf0bc60bf1081f7cfd7043390ca305 not found: ID does not exist" Dec 03 14:11:00.270635 master-0 kubenswrapper[4430]: I1203 14:11:00.270610 4430 scope.go:117] "RemoveContainer" containerID="90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3" Dec 03 14:11:00.271084 master-0 kubenswrapper[4430]: E1203 14:11:00.271043 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3\": container with ID starting with 90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3 not found: ID does not exist" containerID="90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3" Dec 03 14:11:00.271084 master-0 kubenswrapper[4430]: I1203 14:11:00.271070 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3"} err="failed to get container status \"90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3\": rpc error: code = NotFound desc = could not find container \"90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3\": container with ID starting with 90fc30f409c20e16ce48f0e0e5312c2e7c02d050e5c787875f5c1447ea9868f3 not found: ID does not exist" Dec 03 14:11:00.271290 master-0 kubenswrapper[4430]: I1203 14:11:00.271090 4430 scope.go:117] "RemoveContainer" containerID="7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9" Dec 03 14:11:00.271624 master-0 kubenswrapper[4430]: E1203 14:11:00.271583 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9\": container with ID starting with 7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9 not found: ID does not exist" 
containerID="7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9" Dec 03 14:11:00.271691 master-0 kubenswrapper[4430]: I1203 14:11:00.271622 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9"} err="failed to get container status \"7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9\": rpc error: code = NotFound desc = could not find container \"7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9\": container with ID starting with 7a4773207c00dcfd5d2dcc21ef2d82bd24e18a5204ae742248241ef700f1afa9 not found: ID does not exist" Dec 03 14:11:00.271691 master-0 kubenswrapper[4430]: I1203 14:11:00.271639 4430 scope.go:117] "RemoveContainer" containerID="b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef" Dec 03 14:11:00.279855 master-0 kubenswrapper[4430]: I1203 14:11:00.279818 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qb87p" Dec 03 14:11:00.279942 master-0 kubenswrapper[4430]: I1203 14:11:00.279877 4430 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d3676e4-94ef-4595-b106-1a354cceccea-catalog-content\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.287844 master-0 kubenswrapper[4430]: I1203 14:11:00.287816 4430 scope.go:117] "RemoveContainer" containerID="d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3" Dec 03 14:11:00.307775 master-0 kubenswrapper[4430]: I1203 14:11:00.307694 4430 scope.go:117] "RemoveContainer" containerID="b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049" Dec 03 14:11:00.323728 master-0 kubenswrapper[4430]: I1203 14:11:00.323676 4430 scope.go:117] "RemoveContainer" containerID="b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef" Dec 03 14:11:00.324698 master-0 kubenswrapper[4430]: E1203 14:11:00.324650 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef\": container with ID starting with b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef not found: ID does not exist" containerID="b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef" Dec 03 14:11:00.324787 master-0 kubenswrapper[4430]: I1203 14:11:00.324749 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef"} err="failed to get container status \"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef\": rpc error: code = NotFound desc = could not find container \"b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef\": container with ID starting with b14b2486d9c56ae3ac785eb715dc616c6aec31685c76b0e40c148c6dc3d4d7ef not 
found: ID does not exist" Dec 03 14:11:00.324912 master-0 kubenswrapper[4430]: I1203 14:11:00.324794 4430 scope.go:117] "RemoveContainer" containerID="d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3" Dec 03 14:11:00.325257 master-0 kubenswrapper[4430]: E1203 14:11:00.325208 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3\": container with ID starting with d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3 not found: ID does not exist" containerID="d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3" Dec 03 14:11:00.325326 master-0 kubenswrapper[4430]: I1203 14:11:00.325270 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3"} err="failed to get container status \"d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3\": rpc error: code = NotFound desc = could not find container \"d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3\": container with ID starting with d54e8fab8ef605ca179b02ffd92d30deb0ee74f18073f36195065dba8aed65a3 not found: ID does not exist" Dec 03 14:11:00.325326 master-0 kubenswrapper[4430]: I1203 14:11:00.325312 4430 scope.go:117] "RemoveContainer" containerID="b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049" Dec 03 14:11:00.325738 master-0 kubenswrapper[4430]: E1203 14:11:00.325689 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049\": container with ID starting with b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049 not found: ID does not exist" containerID="b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049" Dec 03 14:11:00.325738 master-0 
kubenswrapper[4430]: I1203 14:11:00.325723 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049"} err="failed to get container status \"b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049\": rpc error: code = NotFound desc = could not find container \"b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049\": container with ID starting with b31d221b6d549557d211fb742bd063b551c710f2fc6bab415aff86e00843d049 not found: ID does not exist" Dec 03 14:11:00.483190 master-0 kubenswrapper[4430]: I1203 14:11:00.483092 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9wnq\" (UniqueName: \"kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq\") pod \"bf3821f0-0379-4aba-af13-368b76a2d016\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " Dec 03 14:11:00.484233 master-0 kubenswrapper[4430]: I1203 14:11:00.483339 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities\") pod \"bf3821f0-0379-4aba-af13-368b76a2d016\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " Dec 03 14:11:00.484233 master-0 kubenswrapper[4430]: I1203 14:11:00.483481 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content\") pod \"bf3821f0-0379-4aba-af13-368b76a2d016\" (UID: \"bf3821f0-0379-4aba-af13-368b76a2d016\") " Dec 03 14:11:00.485468 master-0 kubenswrapper[4430]: I1203 14:11:00.485386 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities" (OuterVolumeSpecName: "utilities") pod "bf3821f0-0379-4aba-af13-368b76a2d016" (UID: 
"bf3821f0-0379-4aba-af13-368b76a2d016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:11:00.486805 master-0 kubenswrapper[4430]: I1203 14:11:00.486735 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq" (OuterVolumeSpecName: "kube-api-access-r9wnq") pod "bf3821f0-0379-4aba-af13-368b76a2d016" (UID: "bf3821f0-0379-4aba-af13-368b76a2d016"). InnerVolumeSpecName "kube-api-access-r9wnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:11:00.487178 master-0 kubenswrapper[4430]: I1203 14:11:00.486853 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"] Dec 03 14:11:00.495070 master-0 kubenswrapper[4430]: I1203 14:11:00.495005 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zhkp"] Dec 03 14:11:00.533048 master-0 kubenswrapper[4430]: I1203 14:11:00.532930 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf3821f0-0379-4aba-af13-368b76a2d016" (UID: "bf3821f0-0379-4aba-af13-368b76a2d016"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:11:00.585240 master-0 kubenswrapper[4430]: I1203 14:11:00.585182 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9wnq\" (UniqueName: \"kubernetes.io/projected/bf3821f0-0379-4aba-af13-368b76a2d016-kube-api-access-r9wnq\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.585544 master-0 kubenswrapper[4430]: I1203 14:11:00.585329 4430 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:00.585644 master-0 kubenswrapper[4430]: I1203 14:11:00.585603 4430 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf3821f0-0379-4aba-af13-368b76a2d016-catalog-content\") on node \"master-0\" DevicePath \"\"" Dec 03 14:11:01.129539 master-0 kubenswrapper[4430]: I1203 14:11:01.129453 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qb87p" event={"ID":"bf3821f0-0379-4aba-af13-368b76a2d016","Type":"ContainerDied","Data":"6462d24e8b63365d23fe93b71e9023c7c68008d7cb26514661eba1c478c259f9"} Dec 03 14:11:01.129539 master-0 kubenswrapper[4430]: I1203 14:11:01.129540 4430 scope.go:117] "RemoveContainer" containerID="5e6960dfd595da93060e8eeb09440083b5535bc4215160643d69a14139540910" Dec 03 14:11:01.129962 master-0 kubenswrapper[4430]: I1203 14:11:01.129544 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qb87p" Dec 03 14:11:01.148320 master-0 kubenswrapper[4430]: I1203 14:11:01.148270 4430 scope.go:117] "RemoveContainer" containerID="ad1631adcf8f823c1ab7e5d7a17f72f93919bc9a378782381890948bf47fc7b2" Dec 03 14:11:01.178704 master-0 kubenswrapper[4430]: I1203 14:11:01.178662 4430 scope.go:117] "RemoveContainer" containerID="047c245ba1c2c62c5892ae6e9c30ff6cd2534604a523d21c5897fa5f879462c4" Dec 03 14:11:01.179677 master-0 kubenswrapper[4430]: I1203 14:11:01.179604 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qb87p"] Dec 03 14:11:01.183497 master-0 kubenswrapper[4430]: I1203 14:11:01.183444 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qb87p"] Dec 03 14:11:01.593762 master-0 kubenswrapper[4430]: I1203 14:11:01.593688 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" path="/var/lib/kubelet/pods/5cd2fcaa-4163-46ea-9af8-631006e2b7ca/volumes" Dec 03 14:11:01.595034 master-0 kubenswrapper[4430]: I1203 14:11:01.594989 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" path="/var/lib/kubelet/pods/7d3676e4-94ef-4595-b106-1a354cceccea/volumes" Dec 03 14:11:01.596384 master-0 kubenswrapper[4430]: I1203 14:11:01.596341 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" path="/var/lib/kubelet/pods/92d55c3f-3624-43fe-b81f-28cdb0f2d449/volumes" Dec 03 14:11:01.599003 master-0 kubenswrapper[4430]: I1203 14:11:01.598922 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" path="/var/lib/kubelet/pods/bf3821f0-0379-4aba-af13-368b76a2d016/volumes" Dec 03 14:11:01.622532 master-0 kubenswrapper[4430]: I1203 14:11:01.622473 4430 patch_prober.go:28] interesting 
pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:11:01.622730 master-0 kubenswrapper[4430]: I1203 14:11:01.622566 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:11:31.622262 master-0 kubenswrapper[4430]: I1203 14:11:31.622164 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:11:31.623900 master-0 kubenswrapper[4430]: I1203 14:11:31.622274 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:11:31.623900 master-0 kubenswrapper[4430]: I1203 14:11:31.622470 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:11:31.623900 master-0 kubenswrapper[4430]: I1203 14:11:31.623389 4430 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160"} 
pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 14:11:31.623900 master-0 kubenswrapper[4430]: I1203 14:11:31.623576 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" containerID="cri-o://231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160" gracePeriod=600 Dec 03 14:11:32.351988 master-0 kubenswrapper[4430]: I1203 14:11:32.351902 4430 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160" exitCode=0 Dec 03 14:11:32.351988 master-0 kubenswrapper[4430]: I1203 14:11:32.351979 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160"} Dec 03 14:11:32.352395 master-0 kubenswrapper[4430]: I1203 14:11:32.352052 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb"} Dec 03 14:12:33.019660 master-0 kubenswrapper[4430]: I1203 14:12:33.019516 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:12:33.021157 master-0 kubenswrapper[4430]: E1203 
14:12:33.019955 4430 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:12:33.021157 master-0 kubenswrapper[4430]: E1203 14:12:33.020018 4430 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-4-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:12:33.021157 master-0 kubenswrapper[4430]: E1203 14:12:33.020141 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access podName:0b1e0884-ff54-419b-90d3-25f561a6391d nodeName:}" failed. No retries permitted until 2025-12-03 14:14:35.020106057 +0000 UTC m=+375.643020133 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access") pod "installer-4-master-0" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:13:19.557648 master-0 kubenswrapper[4430]: I1203 14:13:19.557579 4430 kubelet.go:1505] "Image garbage collection succeeded" Dec 03 14:13:22.973134 master-0 kubenswrapper[4430]: I1203 14:13:22.973067 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-58nng"] Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973462 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973497 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973525 4430 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973534 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973547 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973554 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973562 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973569 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973582 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973589 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973602 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973610 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="registry-server" Dec 
03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973621 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973629 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973640 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973647 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973658 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973665 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="extract-utilities" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973682 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973690 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973699 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973705 4430 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="extract-content" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: E1203 14:13:22.973716 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973723 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973906 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd2fcaa-4163-46ea-9af8-631006e2b7ca" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973941 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3676e4-94ef-4595-b106-1a354cceccea" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973953 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf3821f0-0379-4aba-af13-368b76a2d016" containerName="registry-server" Dec 03 14:13:22.973948 master-0 kubenswrapper[4430]: I1203 14:13:22.973966 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d55c3f-3624-43fe-b81f-28cdb0f2d449" containerName="registry-server" Dec 03 14:13:22.975397 master-0 kubenswrapper[4430]: I1203 14:13:22.974672 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:22.977571 master-0 kubenswrapper[4430]: I1203 14:13:22.977510 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-jzxrj" Dec 03 14:13:22.977571 master-0 kubenswrapper[4430]: I1203 14:13:22.977553 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 03 14:13:23.090555 master-0 kubenswrapper[4430]: I1203 14:13:23.090482 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.090917 master-0 kubenswrapper[4430]: I1203 14:13:23.090575 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.090917 master-0 kubenswrapper[4430]: I1203 14:13:23.090647 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlkmj\" (UniqueName: \"kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.090917 master-0 kubenswrapper[4430]: I1203 14:13:23.090721 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.192692 master-0 kubenswrapper[4430]: I1203 14:13:23.192615 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.193293 master-0 kubenswrapper[4430]: I1203 14:13:23.192713 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.193293 master-0 kubenswrapper[4430]: I1203 14:13:23.192791 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlkmj\" (UniqueName: \"kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.193293 master-0 kubenswrapper[4430]: I1203 14:13:23.192879 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.193293 master-0 kubenswrapper[4430]: I1203 14:13:23.193084 4430 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.193762 master-0 kubenswrapper[4430]: I1203 14:13:23.193734 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.322166 master-0 kubenswrapper[4430]: I1203 14:13:23.322068 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.327034 master-0 kubenswrapper[4430]: I1203 14:13:23.326992 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlkmj\" (UniqueName: \"kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj\") pod \"cni-sysctl-allowlist-ds-58nng\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") " pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:23.602754 master-0 kubenswrapper[4430]: I1203 14:13:23.602569 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:24.339830 master-0 kubenswrapper[4430]: I1203 14:13:24.339745 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" event={"ID":"678a2d10-b579-4035-97f2-915a0e43ea48","Type":"ContainerStarted","Data":"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"} Dec 03 14:13:24.339830 master-0 kubenswrapper[4430]: I1203 14:13:24.339811 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" event={"ID":"678a2d10-b579-4035-97f2-915a0e43ea48","Type":"ContainerStarted","Data":"1c0ee3b7efb69152e956127a976ae69785e311a7fc12f5f0624375ad8e4a32d6"} Dec 03 14:13:24.340723 master-0 kubenswrapper[4430]: I1203 14:13:24.340671 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:24.364383 master-0 kubenswrapper[4430]: I1203 14:13:24.364270 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" podStartSLOduration=2.364234184 podStartE2EDuration="2.364234184s" podCreationTimestamp="2025-12-03 14:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:13:24.362111634 +0000 UTC m=+304.985025720" watchObservedRunningTime="2025-12-03 14:13:24.364234184 +0000 UTC m=+304.987148260" Dec 03 14:13:24.368910 master-0 kubenswrapper[4430]: I1203 14:13:24.368862 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" Dec 03 14:13:24.987677 master-0 kubenswrapper[4430]: I1203 14:13:24.987598 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-58nng"] Dec 03 14:13:26.353770 master-0 kubenswrapper[4430]: I1203 14:13:26.353619 4430 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" gracePeriod=30 Dec 03 14:13:27.490717 master-0 kubenswrapper[4430]: I1203 14:13:27.490628 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-764cbf5554-kftwv"] Dec 03 14:13:27.492915 master-0 kubenswrapper[4430]: I1203 14:13:27.492875 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.496020 master-0 kubenswrapper[4430]: I1203 14:13:27.495981 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Dec 03 14:13:27.496375 master-0 kubenswrapper[4430]: I1203 14:13:27.496354 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Dec 03 14:13:27.496660 master-0 kubenswrapper[4430]: I1203 14:13:27.496640 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-fwsd5" Dec 03 14:13:27.496825 master-0 kubenswrapper[4430]: I1203 14:13:27.496661 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Dec 03 14:13:27.499921 master-0 kubenswrapper[4430]: I1203 14:13:27.499882 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Dec 03 14:13:27.500469 master-0 kubenswrapper[4430]: I1203 14:13:27.500445 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Dec 03 14:13:27.504085 master-0 kubenswrapper[4430]: I1203 14:13:27.504056 4430 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" Dec 03 14:13:27.512977 master-0 kubenswrapper[4430]: I1203 14:13:27.512894 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-764cbf5554-kftwv"] Dec 03 14:13:27.660408 master-0 kubenswrapper[4430]: I1203 14:13:27.660345 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.660408 master-0 kubenswrapper[4430]: I1203 14:13:27.660412 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.661090 master-0 kubenswrapper[4430]: I1203 14:13:27.661054 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.661182 master-0 kubenswrapper[4430]: I1203 14:13:27.661137 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.661630 master-0 kubenswrapper[4430]: I1203 14:13:27.661576 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.661768 master-0 kubenswrapper[4430]: I1203 14:13:27.661744 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.661873 master-0 kubenswrapper[4430]: I1203 14:13:27.661802 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.662372 master-0 kubenswrapper[4430]: I1203 14:13:27.662330 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763463 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763524 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763552 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763571 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763589 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.763608 master-0 kubenswrapper[4430]: I1203 14:13:27.763606 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: I1203 14:13:27.763630 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: I1203 14:13:27.763658 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: E1203 14:13:27.764483 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: E1203 14:13:27.764636 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:28.264612735 +0000 UTC m=+308.887526811 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: I1203 14:13:27.764832 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.765410 master-0 kubenswrapper[4430]: I1203 14:13:27.765188 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.766042 master-0 kubenswrapper[4430]: I1203 14:13:27.765985 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.767494 master-0 kubenswrapper[4430]: I1203 14:13:27.767210 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.768456 master-0 kubenswrapper[4430]: I1203 14:13:27.768361 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.769693 master-0 kubenswrapper[4430]: I1203 14:13:27.769211 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:27.781811 master-0 kubenswrapper[4430]: I1203 14:13:27.781757 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:28.271271 master-0 kubenswrapper[4430]: I1203 14:13:28.271154 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:28.271615 master-0 kubenswrapper[4430]: E1203 14:13:28.271318 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:28.271615 master-0 kubenswrapper[4430]: E1203 14:13:28.271403 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:29.271383729 +0000 UTC m=+309.894297805 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:29.288586 master-0 kubenswrapper[4430]: I1203 14:13:29.288511 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:29.289316 master-0 kubenswrapper[4430]: E1203 14:13:29.288726 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:29.289316 master-0 kubenswrapper[4430]: E1203 14:13:29.288795 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:31.288776926 +0000 UTC m=+311.911691002 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:31.337265 master-0 kubenswrapper[4430]: I1203 14:13:31.337166 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:31.338265 master-0 kubenswrapper[4430]: E1203 14:13:31.337440 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:31.338265 master-0 kubenswrapper[4430]: E1203 14:13:31.337559 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:35.337530983 +0000 UTC m=+315.960445059 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:31.622000 master-0 kubenswrapper[4430]: I1203 14:13:31.621815 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:13:31.622000 master-0 kubenswrapper[4430]: I1203 14:13:31.621935 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:13:33.605296 master-0 kubenswrapper[4430]: E1203 14:13:33.605146 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:33.606742 master-0 kubenswrapper[4430]: E1203 14:13:33.606703 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:33.608441 master-0 kubenswrapper[4430]: E1203 14:13:33.608393 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:33.608568 master-0 kubenswrapper[4430]: E1203 14:13:33.608451 4430 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins" Dec 03 14:13:33.636566 master-0 kubenswrapper[4430]: I1203 14:13:33.636495 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"] Dec 03 14:13:33.637965 master-0 kubenswrapper[4430]: I1203 14:13:33.637939 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:33.679738 master-0 kubenswrapper[4430]: I1203 14:13:33.679645 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:33.681893 master-0 kubenswrapper[4430]: I1203 14:13:33.681835 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:33.726073 master-0 kubenswrapper[4430]: I1203 14:13:33.720391 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"] Dec 03 14:13:33.784720 master-0 kubenswrapper[4430]: I1203 14:13:33.784616 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:33.784988 master-0 kubenswrapper[4430]: I1203 14:13:33.784756 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:33.789320 master-0 kubenswrapper[4430]: I1203 14:13:33.789248 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:34.114329 master-0 kubenswrapper[4430]: I1203 14:13:34.114258 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:34.263899 master-0 kubenswrapper[4430]: I1203 14:13:34.263812 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:13:34.668598 master-0 kubenswrapper[4430]: I1203 14:13:34.668432 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"] Dec 03 14:13:34.674116 master-0 kubenswrapper[4430]: W1203 14:13:34.674072 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38888547_ed48_4f96_810d_bcd04e49bd6b.slice/crio-6638ec25cc0e1ee61f7590c2e2fcfef6f3d3bc9fe20f730aa7c0c3c876442299 WatchSource:0}: Error finding container 6638ec25cc0e1ee61f7590c2e2fcfef6f3d3bc9fe20f730aa7c0c3c876442299: Status 404 returned error can't find the container with id 6638ec25cc0e1ee61f7590c2e2fcfef6f3d3bc9fe20f730aa7c0c3c876442299 Dec 03 14:13:35.414217 master-0 kubenswrapper[4430]: I1203 14:13:35.414045 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:35.414217 master-0 kubenswrapper[4430]: E1203 14:13:35.414191 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:35.414688 master-0 kubenswrapper[4430]: E1203 14:13:35.414250 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:13:43.414233954 +0000 UTC m=+324.037148030 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:35.414853 master-0 kubenswrapper[4430]: I1203 14:13:35.414670 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb"} Dec 03 14:13:35.414853 master-0 kubenswrapper[4430]: I1203 14:13:35.414725 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1"} Dec 03 14:13:35.414853 master-0 kubenswrapper[4430]: I1203 14:13:35.414742 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"6638ec25cc0e1ee61f7590c2e2fcfef6f3d3bc9fe20f730aa7c0c3c876442299"} Dec 03 14:13:35.446454 master-0 kubenswrapper[4430]: I1203 14:13:35.444147 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podStartSLOduration=3.444124238 podStartE2EDuration="3.444124238s" podCreationTimestamp="2025-12-03 14:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:13:35.440345262 +0000 UTC m=+316.063259358" 
watchObservedRunningTime="2025-12-03 14:13:35.444124238 +0000 UTC m=+316.067038314" Dec 03 14:13:35.479500 master-0 kubenswrapper[4430]: I1203 14:13:35.479447 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 14:13:35.479814 master-0 kubenswrapper[4430]: I1203 14:13:35.479760 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="multus-admission-controller" containerID="cri-o://d91ef0ad78b6221abcedb3e08cbd0af37a4ae5c5da50c245215f454652d8185e" gracePeriod=30 Dec 03 14:13:35.480176 master-0 kubenswrapper[4430]: I1203 14:13:35.480151 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="kube-rbac-proxy" containerID="cri-o://cf68930d8c87e8957e8fdbba5d623639f91d1b1a3d9d121398a783e96e5e3961" gracePeriod=30 Dec 03 14:13:37.430380 master-0 kubenswrapper[4430]: I1203 14:13:37.430255 4430 generic.go:334] "Generic (PLEG): container finished" podID="22673f47-9484-4eed-bbce-888588c754ed" containerID="cf68930d8c87e8957e8fdbba5d623639f91d1b1a3d9d121398a783e96e5e3961" exitCode=0 Dec 03 14:13:37.430380 master-0 kubenswrapper[4430]: I1203 14:13:37.430328 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerDied","Data":"cf68930d8c87e8957e8fdbba5d623639f91d1b1a3d9d121398a783e96e5e3961"} Dec 03 14:13:43.475021 master-0 kubenswrapper[4430]: I1203 14:13:43.474916 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod 
\"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:43.476325 master-0 kubenswrapper[4430]: E1203 14:13:43.475159 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:43.476325 master-0 kubenswrapper[4430]: E1203 14:13:43.475230 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:59.475211009 +0000 UTC m=+340.098125085 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:43.605622 master-0 kubenswrapper[4430]: E1203 14:13:43.605454 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:43.608095 master-0 kubenswrapper[4430]: E1203 14:13:43.607991 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:43.612266 master-0 kubenswrapper[4430]: E1203 14:13:43.612209 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:43.612266 master-0 kubenswrapper[4430]: E1203 14:13:43.612255 4430 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins" Dec 03 14:13:49.762624 master-0 kubenswrapper[4430]: I1203 14:13:49.761942 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c696657b7-452tx"] Dec 03 14:13:49.763767 master-0 kubenswrapper[4430]: I1203 14:13:49.763739 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:49.766303 master-0 kubenswrapper[4430]: I1203 14:13:49.766241 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 14:13:49.767243 master-0 kubenswrapper[4430]: I1203 14:13:49.767207 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-js47f" Dec 03 14:13:49.767558 master-0 kubenswrapper[4430]: I1203 14:13:49.767514 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 14:13:49.781478 master-0 kubenswrapper[4430]: I1203 14:13:49.781366 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c696657b7-452tx"] Dec 03 14:13:49.883321 master-0 kubenswrapper[4430]: I1203 14:13:49.883204 4430 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:49.883729 master-0 kubenswrapper[4430]: I1203 14:13:49.883659 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:49.985322 master-0 kubenswrapper[4430]: I1203 14:13:49.985216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:49.985702 master-0 kubenswrapper[4430]: I1203 14:13:49.985438 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:49.985702 master-0 kubenswrapper[4430]: E1203 14:13:49.985608 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:13:49.985702 
master-0 kubenswrapper[4430]: E1203 14:13:49.985685 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:50.485666508 +0000 UTC m=+331.108580584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:13:49.986497 master-0 kubenswrapper[4430]: I1203 14:13:49.986436 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:50.494411 master-0 kubenswrapper[4430]: I1203 14:13:50.494307 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:50.494775 master-0 kubenswrapper[4430]: E1203 14:13:50.494574 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:13:50.494775 master-0 kubenswrapper[4430]: E1203 14:13:50.494666 4430 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:51.494641794 +0000 UTC m=+332.117555870 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:13:51.511739 master-0 kubenswrapper[4430]: I1203 14:13:51.511652 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:51.512385 master-0 kubenswrapper[4430]: E1203 14:13:51.511824 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:13:51.512385 master-0 kubenswrapper[4430]: E1203 14:13:51.511903 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:53.511883509 +0000 UTC m=+334.134797585 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:13:53.544352 master-0 kubenswrapper[4430]: I1203 14:13:53.544287 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:13:53.544892 master-0 kubenswrapper[4430]: E1203 14:13:53.544545 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:13:53.544892 master-0 kubenswrapper[4430]: E1203 14:13:53.544610 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:13:57.544589834 +0000 UTC m=+338.167503900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:13:53.604757 master-0 kubenswrapper[4430]: E1203 14:13:53.604638 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:53.606801 master-0 kubenswrapper[4430]: E1203 14:13:53.606725 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:53.608956 master-0 kubenswrapper[4430]: E1203 14:13:53.608862 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:13:53.609064 master-0 kubenswrapper[4430]: E1203 14:13:53.608990 4430 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins" Dec 03 14:13:55.660606 master-0 kubenswrapper[4430]: 
I1203 14:13:55.660465 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"] Dec 03 14:13:55.661802 master-0 kubenswrapper[4430]: I1203 14:13:55.660849 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" podUID="d3200abb-a440-44db-8897-79c809c1d838" containerName="controller-manager" containerID="cri-o://19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04" gracePeriod=30 Dec 03 14:13:55.718098 master-0 kubenswrapper[4430]: I1203 14:13:55.718032 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"] Dec 03 14:13:55.718495 master-0 kubenswrapper[4430]: I1203 14:13:55.718406 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager" containerID="cri-o://29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396" gracePeriod=30 Dec 03 14:13:56.168699 master-0 kubenswrapper[4430]: I1203 14:13:56.168640 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" Dec 03 14:13:56.278650 master-0 kubenswrapper[4430]: I1203 14:13:56.278586 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" Dec 03 14:13:56.301036 master-0 kubenswrapper[4430]: I1203 14:13:56.300941 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") pod \"d3200abb-a440-44db-8897-79c809c1d838\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " Dec 03 14:13:56.301036 master-0 kubenswrapper[4430]: I1203 14:13:56.300999 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") pod \"d3200abb-a440-44db-8897-79c809c1d838\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " Dec 03 14:13:56.301609 master-0 kubenswrapper[4430]: I1203 14:13:56.301115 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") pod \"d3200abb-a440-44db-8897-79c809c1d838\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " Dec 03 14:13:56.301609 master-0 kubenswrapper[4430]: I1203 14:13:56.301166 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") pod \"d3200abb-a440-44db-8897-79c809c1d838\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " Dec 03 14:13:56.301609 master-0 kubenswrapper[4430]: I1203 14:13:56.301324 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") pod \"d3200abb-a440-44db-8897-79c809c1d838\" (UID: \"d3200abb-a440-44db-8897-79c809c1d838\") " Dec 03 14:13:56.304209 master-0 kubenswrapper[4430]: I1203 14:13:56.304103 
4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d3200abb-a440-44db-8897-79c809c1d838" (UID: "d3200abb-a440-44db-8897-79c809c1d838"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.304633 master-0 kubenswrapper[4430]: I1203 14:13:56.304571 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3200abb-a440-44db-8897-79c809c1d838" (UID: "d3200abb-a440-44db-8897-79c809c1d838"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.304749 master-0 kubenswrapper[4430]: I1203 14:13:56.304610 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config" (OuterVolumeSpecName: "config") pod "d3200abb-a440-44db-8897-79c809c1d838" (UID: "d3200abb-a440-44db-8897-79c809c1d838"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.307337 master-0 kubenswrapper[4430]: I1203 14:13:56.307283 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8" (OuterVolumeSpecName: "kube-api-access-lxlb8") pod "d3200abb-a440-44db-8897-79c809c1d838" (UID: "d3200abb-a440-44db-8897-79c809c1d838"). InnerVolumeSpecName "kube-api-access-lxlb8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:13:56.308640 master-0 kubenswrapper[4430]: I1203 14:13:56.308148 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3200abb-a440-44db-8897-79c809c1d838" (UID: "d3200abb-a440-44db-8897-79c809c1d838"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 14:13:56.402934 master-0 kubenswrapper[4430]: I1203 14:13:56.402872 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") pod \"1ba502ba-1179-478e-b4b9-f3409320b0ad\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") "
Dec 03 14:13:56.403111 master-0 kubenswrapper[4430]: I1203 14:13:56.402985 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") pod \"1ba502ba-1179-478e-b4b9-f3409320b0ad\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") "
Dec 03 14:13:56.403111 master-0 kubenswrapper[4430]: I1203 14:13:56.403033 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") pod \"1ba502ba-1179-478e-b4b9-f3409320b0ad\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") "
Dec 03 14:13:56.403294 master-0 kubenswrapper[4430]: I1203 14:13:56.403259 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") pod \"1ba502ba-1179-478e-b4b9-f3409320b0ad\" (UID: \"1ba502ba-1179-478e-b4b9-f3409320b0ad\") "
Dec 03 14:13:56.403624 master-0 kubenswrapper[4430]: I1203 14:13:56.403569 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config" (OuterVolumeSpecName: "config") pod "1ba502ba-1179-478e-b4b9-f3409320b0ad" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.403836 master-0 kubenswrapper[4430]: I1203 14:13:56.403796 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxlb8\" (UniqueName: \"kubernetes.io/projected/d3200abb-a440-44db-8897-79c809c1d838-kube-api-access-lxlb8\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.403836 master-0 kubenswrapper[4430]: I1203 14:13:56.403833 4430 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-config\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.404018 master-0 kubenswrapper[4430]: I1203 14:13:56.403851 4430 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3200abb-a440-44db-8897-79c809c1d838-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.404018 master-0 kubenswrapper[4430]: I1203 14:13:56.403868 4430 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.404018 master-0 kubenswrapper[4430]: I1203 14:13:56.403882 4430 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-config\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.404018 master-0 kubenswrapper[4430]: I1203 14:13:56.403894 4430 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d3200abb-a440-44db-8897-79c809c1d838-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.404363 master-0 kubenswrapper[4430]: I1203 14:13:56.404106 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ba502ba-1179-478e-b4b9-f3409320b0ad" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.408260 master-0 kubenswrapper[4430]: I1203 14:13:56.408205 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz" (OuterVolumeSpecName: "kube-api-access-lq4dz") pod "1ba502ba-1179-478e-b4b9-f3409320b0ad" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad"). InnerVolumeSpecName "kube-api-access-lq4dz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:13:56.408534 master-0 kubenswrapper[4430]: I1203 14:13:56.408269 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ba502ba-1179-478e-b4b9-f3409320b0ad" (UID: "1ba502ba-1179-478e-b4b9-f3409320b0ad"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 14:13:56.464579 master-0 kubenswrapper[4430]: I1203 14:13:56.464512 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-58nng_678a2d10-b579-4035-97f2-915a0e43ea48/kube-multus-additional-cni-plugins/0.log"
Dec 03 14:13:56.464832 master-0 kubenswrapper[4430]: I1203 14:13:56.464614 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng"
Dec 03 14:13:56.506436 master-0 kubenswrapper[4430]: I1203 14:13:56.506277 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq4dz\" (UniqueName: \"kubernetes.io/projected/1ba502ba-1179-478e-b4b9-f3409320b0ad-kube-api-access-lq4dz\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.506436 master-0 kubenswrapper[4430]: I1203 14:13:56.506330 4430 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba502ba-1179-478e-b4b9-f3409320b0ad-client-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.506436 master-0 kubenswrapper[4430]: I1203 14:13:56.506339 4430 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba502ba-1179-478e-b4b9-f3409320b0ad-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.574178 master-0 kubenswrapper[4430]: I1203 14:13:56.574104 4430 generic.go:334] "Generic (PLEG): container finished" podID="d3200abb-a440-44db-8897-79c809c1d838" containerID="19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04" exitCode=0
Dec 03 14:13:56.574482 master-0 kubenswrapper[4430]: I1203 14:13:56.574209 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerDied","Data":"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"}
Dec 03 14:13:56.574482 master-0 kubenswrapper[4430]: I1203 14:13:56.574246 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w" event={"ID":"d3200abb-a440-44db-8897-79c809c1d838","Type":"ContainerDied","Data":"7bf70af3f09db2d96a85682f64e57f15fe5e480a5dd1cf6c25d74898602d8539"}
Dec 03 14:13:56.574482 master-0 kubenswrapper[4430]: I1203 14:13:56.574279 4430 scope.go:117] "RemoveContainer" containerID="19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"
Dec 03 14:13:56.574482 master-0 kubenswrapper[4430]: I1203 14:13:56.574461 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78d987764b-xcs5w"
Dec 03 14:13:56.577098 master-0 kubenswrapper[4430]: I1203 14:13:56.577064 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-58nng_678a2d10-b579-4035-97f2-915a0e43ea48/kube-multus-additional-cni-plugins/0.log"
Dec 03 14:13:56.577175 master-0 kubenswrapper[4430]: I1203 14:13:56.577115 4430 generic.go:334] "Generic (PLEG): container finished" podID="678a2d10-b579-4035-97f2-915a0e43ea48" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de" exitCode=137
Dec 03 14:13:56.577222 master-0 kubenswrapper[4430]: I1203 14:13:56.577172 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" event={"ID":"678a2d10-b579-4035-97f2-915a0e43ea48","Type":"ContainerDied","Data":"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"}
Dec 03 14:13:56.577222 master-0 kubenswrapper[4430]: I1203 14:13:56.577206 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng" event={"ID":"678a2d10-b579-4035-97f2-915a0e43ea48","Type":"ContainerDied","Data":"1c0ee3b7efb69152e956127a976ae69785e311a7fc12f5f0624375ad8e4a32d6"}
Dec 03 14:13:56.577338 master-0 kubenswrapper[4430]: I1203 14:13:56.577258 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-58nng"
Dec 03 14:13:56.580113 master-0 kubenswrapper[4430]: I1203 14:13:56.580020 4430 generic.go:334] "Generic (PLEG): container finished" podID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerID="29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396" exitCode=0
Dec 03 14:13:56.580113 master-0 kubenswrapper[4430]: I1203 14:13:56.580060 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerDied","Data":"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"}
Dec 03 14:13:56.580113 master-0 kubenswrapper[4430]: I1203 14:13:56.580086 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv" event={"ID":"1ba502ba-1179-478e-b4b9-f3409320b0ad","Type":"ContainerDied","Data":"12d69da055ffcd27322e3a8adecbefb36c38d1067abe73e04477a138ccba0aa7"}
Dec 03 14:13:56.580113 master-0 kubenswrapper[4430]: I1203 14:13:56.580093 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"
Dec 03 14:13:56.593882 master-0 kubenswrapper[4430]: I1203 14:13:56.593824 4430 scope.go:117] "RemoveContainer" containerID="19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"
Dec 03 14:13:56.594315 master-0 kubenswrapper[4430]: E1203 14:13:56.594284 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04\": container with ID starting with 19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04 not found: ID does not exist" containerID="19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"
Dec 03 14:13:56.594372 master-0 kubenswrapper[4430]: I1203 14:13:56.594330 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04"} err="failed to get container status \"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04\": rpc error: code = NotFound desc = could not find container \"19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04\": container with ID starting with 19cd1be0a1fd107136c87f10930951b933947476c18e9119b4fdeaf6fc43da04 not found: ID does not exist"
Dec 03 14:13:56.594372 master-0 kubenswrapper[4430]: I1203 14:13:56.594356 4430 scope.go:117] "RemoveContainer" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"
Dec 03 14:13:56.607306 master-0 kubenswrapper[4430]: I1203 14:13:56.607245 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist\") pod \"678a2d10-b579-4035-97f2-915a0e43ea48\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") "
Dec 03 14:13:56.607575 master-0 kubenswrapper[4430]: I1203 14:13:56.607354 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir\") pod \"678a2d10-b579-4035-97f2-915a0e43ea48\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") "
Dec 03 14:13:56.607575 master-0 kubenswrapper[4430]: I1203 14:13:56.607392 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready\") pod \"678a2d10-b579-4035-97f2-915a0e43ea48\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") "
Dec 03 14:13:56.607732 master-0 kubenswrapper[4430]: I1203 14:13:56.607521 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlkmj\" (UniqueName: \"kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj\") pod \"678a2d10-b579-4035-97f2-915a0e43ea48\" (UID: \"678a2d10-b579-4035-97f2-915a0e43ea48\") "
Dec 03 14:13:56.608369 master-0 kubenswrapper[4430]: I1203 14:13:56.608254 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "678a2d10-b579-4035-97f2-915a0e43ea48" (UID: "678a2d10-b579-4035-97f2-915a0e43ea48"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:13:56.608369 master-0 kubenswrapper[4430]: I1203 14:13:56.608302 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "678a2d10-b579-4035-97f2-915a0e43ea48" (UID: "678a2d10-b579-4035-97f2-915a0e43ea48"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:13:56.608545 master-0 kubenswrapper[4430]: I1203 14:13:56.608461 4430 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/678a2d10-b579-4035-97f2-915a0e43ea48-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.608545 master-0 kubenswrapper[4430]: I1203 14:13:56.608484 4430 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/678a2d10-b579-4035-97f2-915a0e43ea48-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.608734 master-0 kubenswrapper[4430]: I1203 14:13:56.608697 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready" (OuterVolumeSpecName: "ready") pod "678a2d10-b579-4035-97f2-915a0e43ea48" (UID: "678a2d10-b579-4035-97f2-915a0e43ea48"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:13:56.613940 master-0 kubenswrapper[4430]: I1203 14:13:56.613859 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj" (OuterVolumeSpecName: "kube-api-access-xlkmj") pod "678a2d10-b579-4035-97f2-915a0e43ea48" (UID: "678a2d10-b579-4035-97f2-915a0e43ea48"). InnerVolumeSpecName "kube-api-access-xlkmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:13:56.615392 master-0 kubenswrapper[4430]: I1203 14:13:56.614787 4430 scope.go:117] "RemoveContainer" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"
Dec 03 14:13:56.615638 master-0 kubenswrapper[4430]: E1203 14:13:56.615594 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de\": container with ID starting with 55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de not found: ID does not exist" containerID="55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"
Dec 03 14:13:56.615751 master-0 kubenswrapper[4430]: I1203 14:13:56.615645 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de"} err="failed to get container status \"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de\": rpc error: code = NotFound desc = could not find container \"55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de\": container with ID starting with 55231e522bdabaa48c30aebbbedf814f09019874fd5911375d1c011923efb7de not found: ID does not exist"
Dec 03 14:13:56.615751 master-0 kubenswrapper[4430]: I1203 14:13:56.615678 4430 scope.go:117] "RemoveContainer" containerID="29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"
Dec 03 14:13:56.624175 master-0 kubenswrapper[4430]: I1203 14:13:56.624136 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"]
Dec 03 14:13:56.630281 master-0 kubenswrapper[4430]: I1203 14:13:56.630198 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-678c7f799b-4b7nv"]
Dec 03 14:13:56.636858 master-0 kubenswrapper[4430]: I1203 14:13:56.636801 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"]
Dec 03 14:13:56.641915 master-0 kubenswrapper[4430]: I1203 14:13:56.641872 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78d987764b-xcs5w"]
Dec 03 14:13:56.643628 master-0 kubenswrapper[4430]: I1203 14:13:56.643601 4430 scope.go:117] "RemoveContainer" containerID="29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"
Dec 03 14:13:56.644609 master-0 kubenswrapper[4430]: E1203 14:13:56.644155 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396\": container with ID starting with 29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396 not found: ID does not exist" containerID="29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"
Dec 03 14:13:56.644609 master-0 kubenswrapper[4430]: I1203 14:13:56.644207 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396"} err="failed to get container status \"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396\": rpc error: code = NotFound desc = could not find container \"29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396\": container with ID starting with 29053444b96be204cfacc06f351623469ad90ff08fa41fd3de790b64c121c396 not found: ID does not exist"
Dec 03 14:13:56.712207 master-0 kubenswrapper[4430]: I1203 14:13:56.711437 4430 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/678a2d10-b579-4035-97f2-915a0e43ea48-ready\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.712207 master-0 kubenswrapper[4430]: I1203 14:13:56.711471 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlkmj\" (UniqueName: \"kubernetes.io/projected/678a2d10-b579-4035-97f2-915a0e43ea48-kube-api-access-xlkmj\") on node \"master-0\" DevicePath \"\""
Dec 03 14:13:56.895318 master-0 kubenswrapper[4430]: I1203 14:13:56.895192 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"]
Dec 03 14:13:56.895604 master-0 kubenswrapper[4430]: E1203 14:13:56.895571 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager"
Dec 03 14:13:56.895604 master-0 kubenswrapper[4430]: I1203 14:13:56.895590 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager"
Dec 03 14:13:56.895753 master-0 kubenswrapper[4430]: E1203 14:13:56.895619 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins"
Dec 03 14:13:56.895753 master-0 kubenswrapper[4430]: I1203 14:13:56.895630 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins"
Dec 03 14:13:56.895753 master-0 kubenswrapper[4430]: E1203 14:13:56.895647 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3200abb-a440-44db-8897-79c809c1d838" containerName="controller-manager"
Dec 03 14:13:56.895753 master-0 kubenswrapper[4430]: I1203 14:13:56.895656 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3200abb-a440-44db-8897-79c809c1d838" containerName="controller-manager"
Dec 03 14:13:56.895890 master-0 kubenswrapper[4430]: I1203 14:13:56.895809 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3200abb-a440-44db-8897-79c809c1d838" containerName="controller-manager"
Dec 03 14:13:56.895890 master-0 kubenswrapper[4430]: I1203 14:13:56.895838 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" containerName="route-controller-manager"
Dec 03 14:13:56.895890 master-0 kubenswrapper[4430]: I1203 14:13:56.895849 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" containerName="kube-multus-additional-cni-plugins"
Dec 03 14:13:56.896498 master-0 kubenswrapper[4430]: I1203 14:13:56.896445 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:56.900755 master-0 kubenswrapper[4430]: I1203 14:13:56.900709 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9"
Dec 03 14:13:56.900980 master-0 kubenswrapper[4430]: I1203 14:13:56.900832 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 03 14:13:56.900980 master-0 kubenswrapper[4430]: I1203 14:13:56.900925 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 14:13:56.901092 master-0 kubenswrapper[4430]: I1203 14:13:56.900988 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 14:13:56.901092 master-0 kubenswrapper[4430]: I1203 14:13:56.901073 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 14:13:56.901316 master-0 kubenswrapper[4430]: I1203 14:13:56.901099 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 14:13:56.905301 master-0 kubenswrapper[4430]: I1203 14:13:56.905218 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"]
Dec 03 14:13:56.907304 master-0 kubenswrapper[4430]: I1203 14:13:56.907264 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 14:13:56.938745 master-0 kubenswrapper[4430]: I1203 14:13:56.938652 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-58nng"]
Dec 03 14:13:56.942351 master-0 kubenswrapper[4430]: I1203 14:13:56.942288 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-58nng"]
Dec 03 14:13:57.017688 master-0 kubenswrapper[4430]: I1203 14:13:57.017537 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.017688 master-0 kubenswrapper[4430]: I1203 14:13:57.017606 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.018047 master-0 kubenswrapper[4430]: I1203 14:13:57.017694 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.018047 master-0 kubenswrapper[4430]: I1203 14:13:57.017782 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.018047 master-0 kubenswrapper[4430]: I1203 14:13:57.017807 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.119685 master-0 kubenswrapper[4430]: I1203 14:13:57.119403 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.119685 master-0 kubenswrapper[4430]: I1203 14:13:57.119490 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.119685 master-0 kubenswrapper[4430]: I1203 14:13:57.119514 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.119685 master-0 kubenswrapper[4430]: I1203 14:13:57.119557 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.119685 master-0 kubenswrapper[4430]: I1203 14:13:57.119580 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.120931 master-0 kubenswrapper[4430]: I1203 14:13:57.120866 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.121676 master-0 kubenswrapper[4430]: I1203 14:13:57.121571 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.121676 master-0 kubenswrapper[4430]: I1203 14:13:57.121640 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.126525 master-0 kubenswrapper[4430]: I1203 14:13:57.125811 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.138764 master-0 kubenswrapper[4430]: I1203 14:13:57.138691 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.237389 master-0 kubenswrapper[4430]: I1203 14:13:57.237312 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:13:57.595802 master-0 kubenswrapper[4430]: I1203 14:13:57.595498 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba502ba-1179-478e-b4b9-f3409320b0ad" path="/var/lib/kubelet/pods/1ba502ba-1179-478e-b4b9-f3409320b0ad/volumes"
Dec 03 14:13:57.596331 master-0 kubenswrapper[4430]: I1203 14:13:57.596311 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678a2d10-b579-4035-97f2-915a0e43ea48" path="/var/lib/kubelet/pods/678a2d10-b579-4035-97f2-915a0e43ea48/volumes"
Dec 03 14:13:57.597121 master-0 kubenswrapper[4430]: I1203 14:13:57.597094 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3200abb-a440-44db-8897-79c809c1d838" path="/var/lib/kubelet/pods/d3200abb-a440-44db-8897-79c809c1d838/volumes"
Dec 03 14:13:57.634976 master-0 kubenswrapper[4430]: I1203 14:13:57.634894 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:13:57.635314 master-0 kubenswrapper[4430]: E1203 14:13:57.635245 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Dec 03 14:13:57.635732 master-0 kubenswrapper[4430]: E1203 14:13:57.635693 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:14:05.635663782 +0000 UTC m=+346.258577858 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found
Dec 03 14:13:57.654832 master-0 kubenswrapper[4430]: I1203 14:13:57.654715 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"]
Dec 03 14:13:57.905513 master-0 kubenswrapper[4430]: I1203 14:13:57.905282 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"]
Dec 03 14:13:57.907492 master-0 kubenswrapper[4430]: I1203 14:13:57.907461 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:57.912676 master-0 kubenswrapper[4430]: I1203 14:13:57.912624 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 14:13:57.912676 master-0 kubenswrapper[4430]: I1203 14:13:57.912874 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68"
Dec 03 14:13:57.912676 master-0 kubenswrapper[4430]: I1203 14:13:57.913259 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 14:13:57.919108 master-0 kubenswrapper[4430]: I1203 14:13:57.916804 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 03 14:13:57.919108 master-0 kubenswrapper[4430]: I1203 14:13:57.917313 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 14:13:57.919108 master-0 kubenswrapper[4430]: I1203 14:13:57.918808 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 03 14:13:57.919732 master-0 kubenswrapper[4430]: I1203 14:13:57.919649 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"]
Dec 03 14:13:58.041182 master-0 kubenswrapper[4430]: I1203 14:13:58.041117 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.041522 master-0 kubenswrapper[4430]: I1203 14:13:58.041407 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.041590 master-0 kubenswrapper[4430]: I1203 14:13:58.041545 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.041832 master-0 kubenswrapper[4430]: I1203 14:13:58.041806 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.143098 master-0 kubenswrapper[4430]: I1203 14:13:58.143048 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.143388 master-0 kubenswrapper[4430]: I1203 14:13:58.143112 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.143388 master-0 kubenswrapper[4430]: I1203 14:13:58.143169 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:13:58.143388 master-0 kubenswrapper[4430]: I1203 14:13:58.143204 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") "
pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.145083 master-0 kubenswrapper[4430]: I1203 14:13:58.144390 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.145083 master-0 kubenswrapper[4430]: I1203 14:13:58.144703 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.147479 master-0 kubenswrapper[4430]: I1203 14:13:58.147433 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.180149 master-0 kubenswrapper[4430]: I1203 14:13:58.180022 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.255598 master-0 kubenswrapper[4430]: I1203 14:13:58.255531 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:13:58.605445 master-0 kubenswrapper[4430]: I1203 14:13:58.605338 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerStarted","Data":"778a7e26cb5ad94e90459f8aaf6d567b6415a8e54be0dc590a3189cec8d79a9c"} Dec 03 14:13:58.605723 master-0 kubenswrapper[4430]: I1203 14:13:58.605458 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerStarted","Data":"d2a6a688611e8b9ecd529c9208a2ccaa18a49d86dc2e02401bdff2775889cb0c"} Dec 03 14:13:58.605849 master-0 kubenswrapper[4430]: I1203 14:13:58.605820 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:13:58.612097 master-0 kubenswrapper[4430]: I1203 14:13:58.611307 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:13:58.696106 master-0 kubenswrapper[4430]: I1203 14:13:58.696044 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"] Dec 03 14:13:58.697766 master-0 kubenswrapper[4430]: I1203 14:13:58.697706 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podStartSLOduration=3.69768535 podStartE2EDuration="3.69768535s" podCreationTimestamp="2025-12-03 14:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:13:58.692093472 +0000 UTC m=+339.315007568" 
watchObservedRunningTime="2025-12-03 14:13:58.69768535 +0000 UTC m=+339.320599426" Dec 03 14:13:58.703252 master-0 kubenswrapper[4430]: W1203 14:13:58.703205 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a557d1_cdd9_47ff_afbd_a301e7f589a7.slice/crio-c6d497d480d2c4f1bca1404d17e66f2ffa0c123f7202edfe50562b0b9b2859e7 WatchSource:0}: Error finding container c6d497d480d2c4f1bca1404d17e66f2ffa0c123f7202edfe50562b0b9b2859e7: Status 404 returned error can't find the container with id c6d497d480d2c4f1bca1404d17e66f2ffa0c123f7202edfe50562b0b9b2859e7 Dec 03 14:13:59.572661 master-0 kubenswrapper[4430]: I1203 14:13:59.572586 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:13:59.573368 master-0 kubenswrapper[4430]: E1203 14:13:59.572852 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:13:59.573368 master-0 kubenswrapper[4430]: E1203 14:13:59.572961 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:14:31.572939035 +0000 UTC m=+372.195853111 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:13:59.617507 master-0 kubenswrapper[4430]: I1203 14:13:59.617392 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" event={"ID":"33a557d1-cdd9-47ff-afbd-a301e7f589a7","Type":"ContainerStarted","Data":"28ce7e4f4b7b809dea3317a3bd31405197eb646e5fff42923fa0594a737b24b6"} Dec 03 14:13:59.617507 master-0 kubenswrapper[4430]: I1203 14:13:59.617508 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" event={"ID":"33a557d1-cdd9-47ff-afbd-a301e7f589a7","Type":"ContainerStarted","Data":"c6d497d480d2c4f1bca1404d17e66f2ffa0c123f7202edfe50562b0b9b2859e7"} Dec 03 14:14:00.109922 master-0 kubenswrapper[4430]: I1203 14:14:00.109812 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podStartSLOduration=5.109783728 podStartE2EDuration="5.109783728s" podCreationTimestamp="2025-12-03 14:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:00.104481979 +0000 UTC m=+340.727396075" watchObservedRunningTime="2025-12-03 14:14:00.109783728 +0000 UTC m=+340.732697824" Dec 03 14:14:00.624771 master-0 kubenswrapper[4430]: I1203 14:14:00.624712 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:14:00.631310 master-0 kubenswrapper[4430]: I1203 14:14:00.631263 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:14:01.622273 master-0 kubenswrapper[4430]: I1203 14:14:01.622156 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:14:01.622273 master-0 kubenswrapper[4430]: I1203 14:14:01.622268 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:14:05.680224 master-0 kubenswrapper[4430]: I1203 14:14:05.680125 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:14:05.681130 master-0 kubenswrapper[4430]: E1203 14:14:05.680396 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:14:05.681130 master-0 kubenswrapper[4430]: E1203 14:14:05.680540 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:14:21.680507952 +0000 UTC m=+362.303422178 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:14:06.669502 master-0 kubenswrapper[4430]: I1203 14:14:06.669467 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5bdcc987c4-x99xc_22673f47-9484-4eed-bbce-888588c754ed/multus-admission-controller/1.log" Dec 03 14:14:06.669826 master-0 kubenswrapper[4430]: I1203 14:14:06.669803 4430 generic.go:334] "Generic (PLEG): container finished" podID="22673f47-9484-4eed-bbce-888588c754ed" containerID="d91ef0ad78b6221abcedb3e08cbd0af37a4ae5c5da50c245215f454652d8185e" exitCode=137 Dec 03 14:14:06.669924 master-0 kubenswrapper[4430]: I1203 14:14:06.669895 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerDied","Data":"d91ef0ad78b6221abcedb3e08cbd0af37a4ae5c5da50c245215f454652d8185e"} Dec 03 14:14:07.966306 master-0 kubenswrapper[4430]: I1203 14:14:07.966236 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5bdcc987c4-x99xc_22673f47-9484-4eed-bbce-888588c754ed/multus-admission-controller/1.log" Dec 03 14:14:07.967185 master-0 kubenswrapper[4430]: I1203 14:14:07.966375 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:14:08.071742 master-0 kubenswrapper[4430]: I1203 14:14:08.069243 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") pod \"22673f47-9484-4eed-bbce-888588c754ed\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " Dec 03 14:14:08.071742 master-0 kubenswrapper[4430]: I1203 14:14:08.069335 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") pod \"22673f47-9484-4eed-bbce-888588c754ed\" (UID: \"22673f47-9484-4eed-bbce-888588c754ed\") " Dec 03 14:14:08.075261 master-0 kubenswrapper[4430]: I1203 14:14:08.075184 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "22673f47-9484-4eed-bbce-888588c754ed" (UID: "22673f47-9484-4eed-bbce-888588c754ed"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:14:08.075490 master-0 kubenswrapper[4430]: I1203 14:14:08.075198 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf" (OuterVolumeSpecName: "kube-api-access-9rtlf") pod "22673f47-9484-4eed-bbce-888588c754ed" (UID: "22673f47-9484-4eed-bbce-888588c754ed"). InnerVolumeSpecName "kube-api-access-9rtlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:14:08.366198 master-0 kubenswrapper[4430]: I1203 14:14:08.365087 4430 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/22673f47-9484-4eed-bbce-888588c754ed-webhook-certs\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:08.366198 master-0 kubenswrapper[4430]: I1203 14:14:08.365131 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rtlf\" (UniqueName: \"kubernetes.io/projected/22673f47-9484-4eed-bbce-888588c754ed-kube-api-access-9rtlf\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:08.693621 master-0 kubenswrapper[4430]: I1203 14:14:08.693455 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5bdcc987c4-x99xc_22673f47-9484-4eed-bbce-888588c754ed/multus-admission-controller/1.log" Dec 03 14:14:08.693621 master-0 kubenswrapper[4430]: I1203 14:14:08.693564 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" event={"ID":"22673f47-9484-4eed-bbce-888588c754ed","Type":"ContainerDied","Data":"167491accbcc7761df1a93bd8d4d7ce925c43643ae8d917bb763188fd267db1b"} Dec 03 14:14:08.694005 master-0 kubenswrapper[4430]: I1203 14:14:08.693679 4430 scope.go:117] "RemoveContainer" containerID="cf68930d8c87e8957e8fdbba5d623639f91d1b1a3d9d121398a783e96e5e3961" Dec 03 14:14:08.694005 master-0 kubenswrapper[4430]: I1203 14:14:08.693740 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5bdcc987c4-x99xc" Dec 03 14:14:08.721362 master-0 kubenswrapper[4430]: I1203 14:14:08.721322 4430 scope.go:117] "RemoveContainer" containerID="d91ef0ad78b6221abcedb3e08cbd0af37a4ae5c5da50c245215f454652d8185e" Dec 03 14:14:08.747692 master-0 kubenswrapper[4430]: I1203 14:14:08.747536 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 14:14:08.756605 master-0 kubenswrapper[4430]: I1203 14:14:08.756530 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5bdcc987c4-x99xc"] Dec 03 14:14:09.315465 master-0 kubenswrapper[4430]: I1203 14:14:09.315340 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59fc685495-qcxmz"] Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: E1203 14:14:09.315874 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="kube-rbac-proxy" Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: I1203 14:14:09.315915 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="kube-rbac-proxy" Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: E1203 14:14:09.315933 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="multus-admission-controller" Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: I1203 14:14:09.315941 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="multus-admission-controller" Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: I1203 14:14:09.316178 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="kube-rbac-proxy" Dec 03 14:14:09.316377 master-0 kubenswrapper[4430]: I1203 
14:14:09.316201 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="22673f47-9484-4eed-bbce-888588c754ed" containerName="multus-admission-controller" Dec 03 14:14:09.317106 master-0 kubenswrapper[4430]: I1203 14:14:09.317070 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.336656 master-0 kubenswrapper[4430]: I1203 14:14:09.336559 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59fc685495-qcxmz"] Dec 03 14:14:09.385073 master-0 kubenswrapper[4430]: I1203 14:14:09.384971 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385073 master-0 kubenswrapper[4430]: I1203 14:14:09.385065 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385073 master-0 kubenswrapper[4430]: I1203 14:14:09.385088 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385659 master-0 kubenswrapper[4430]: I1203 14:14:09.385171 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385659 master-0 kubenswrapper[4430]: I1203 14:14:09.385341 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385659 master-0 kubenswrapper[4430]: I1203 14:14:09.385472 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2zs6\" (UniqueName: \"kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.385659 master-0 kubenswrapper[4430]: I1203 14:14:09.385510 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.486780 master-0 kubenswrapper[4430]: I1203 14:14:09.486680 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.486780 master-0 
kubenswrapper[4430]: I1203 14:14:09.486760 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2zs6\" (UniqueName: \"kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.486780 master-0 kubenswrapper[4430]: I1203 14:14:09.486784 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.487357 master-0 kubenswrapper[4430]: I1203 14:14:09.486852 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.487357 master-0 kubenswrapper[4430]: I1203 14:14:09.486905 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.487357 master-0 kubenswrapper[4430]: I1203 14:14:09.486925 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 
14:14:09.487357 master-0 kubenswrapper[4430]: I1203 14:14:09.486946 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.488592 master-0 kubenswrapper[4430]: I1203 14:14:09.488534 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.488948 master-0 kubenswrapper[4430]: I1203 14:14:09.488889 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.489052 master-0 kubenswrapper[4430]: I1203 14:14:09.488936 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.489444 master-0 kubenswrapper[4430]: I1203 14:14:09.489389 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 
14:14:09.491086 master-0 kubenswrapper[4430]: I1203 14:14:09.491040 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.491654 master-0 kubenswrapper[4430]: I1203 14:14:09.491622 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.598675 master-0 kubenswrapper[4430]: I1203 14:14:09.598325 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22673f47-9484-4eed-bbce-888588c754ed" path="/var/lib/kubelet/pods/22673f47-9484-4eed-bbce-888588c754ed/volumes" Dec 03 14:14:09.772930 master-0 kubenswrapper[4430]: I1203 14:14:09.772844 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:09.776445 master-0 kubenswrapper[4430]: I1203 14:14:09.773987 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2zs6\" (UniqueName: \"kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6\") pod \"console-59fc685495-qcxmz\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:09.776445 master-0 kubenswrapper[4430]: I1203 14:14:09.774470 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.777008 master-0 kubenswrapper[4430]: I1203 14:14:09.776939 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-4xstp" Dec 03 14:14:09.777209 master-0 kubenswrapper[4430]: I1203 14:14:09.777171 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 14:14:09.793151 master-0 kubenswrapper[4430]: I1203 14:14:09.793067 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.793151 master-0 kubenswrapper[4430]: I1203 14:14:09.793153 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.793443 master-0 kubenswrapper[4430]: I1203 14:14:09.793208 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.895026 master-0 kubenswrapper[4430]: I1203 14:14:09.894858 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.895026 master-0 kubenswrapper[4430]: I1203 14:14:09.894930 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.895026 master-0 kubenswrapper[4430]: I1203 14:14:09.894987 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.895647 master-0 kubenswrapper[4430]: I1203 14:14:09.895609 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.895700 master-0 kubenswrapper[4430]: I1203 14:14:09.895668 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:09.896711 master-0 kubenswrapper[4430]: I1203 14:14:09.896596 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:09.951376 master-0 kubenswrapper[4430]: I1203 14:14:09.951297 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:10.387508 master-0 kubenswrapper[4430]: I1203 14:14:10.387184 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:10.441126 master-0 kubenswrapper[4430]: I1203 14:14:10.441038 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:10.589729 master-0 kubenswrapper[4430]: I1203 14:14:10.568613 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59fc685495-qcxmz"] Dec 03 14:14:10.606246 master-0 kubenswrapper[4430]: W1203 14:14:10.606187 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1566ad2b_b965_4ba3_8a8b_f93b39e732c8.slice/crio-4956cd007e6ecb51108f9e9a78c71dec3f8014c0541cfe11d2ebc9322d1d01de WatchSource:0}: Error finding container 4956cd007e6ecb51108f9e9a78c71dec3f8014c0541cfe11d2ebc9322d1d01de: Status 404 returned error can't find the container with id 4956cd007e6ecb51108f9e9a78c71dec3f8014c0541cfe11d2ebc9322d1d01de Dec 03 14:14:10.717538 master-0 kubenswrapper[4430]: I1203 14:14:10.717477 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59fc685495-qcxmz" event={"ID":"1566ad2b-b965-4ba3-8a8b-f93b39e732c8","Type":"ContainerStarted","Data":"4956cd007e6ecb51108f9e9a78c71dec3f8014c0541cfe11d2ebc9322d1d01de"} Dec 03 14:14:11.450695 master-0 kubenswrapper[4430]: I1203 
14:14:11.447256 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:11.732846 master-0 kubenswrapper[4430]: I1203 14:14:11.732765 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59fc685495-qcxmz" event={"ID":"1566ad2b-b965-4ba3-8a8b-f93b39e732c8","Type":"ContainerStarted","Data":"a4e04e6c524ae142d3954a6e6c8326b5e0e2ba6787ee66a27feaa743b480ba37"} Dec 03 14:14:11.736639 master-0 kubenswrapper[4430]: I1203 14:14:11.736573 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc","Type":"ContainerStarted","Data":"70f097298f2922e9083450244c57c979dabce7ca374a8306eecebbd3db1233f4"} Dec 03 14:14:11.760990 master-0 kubenswrapper[4430]: I1203 14:14:11.759085 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59fc685495-qcxmz" podStartSLOduration=2.7590305170000002 podStartE2EDuration="2.759030517s" podCreationTimestamp="2025-12-03 14:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:11.756730382 +0000 UTC m=+352.379644478" watchObservedRunningTime="2025-12-03 14:14:11.759030517 +0000 UTC m=+352.381944593" Dec 03 14:14:12.744348 master-0 kubenswrapper[4430]: I1203 14:14:12.744257 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc","Type":"ContainerStarted","Data":"e2570fe65faeb3a26079115c3f16ca3645b95062599663d83af730233ef1a157"} Dec 03 14:14:12.896877 master-0 kubenswrapper[4430]: I1203 14:14:12.896762 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=3.8967349110000002 
podStartE2EDuration="3.896734911s" podCreationTimestamp="2025-12-03 14:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:12.89596542 +0000 UTC m=+353.518879516" watchObservedRunningTime="2025-12-03 14:14:12.896734911 +0000 UTC m=+353.519648987" Dec 03 14:14:19.726060 master-0 kubenswrapper[4430]: E1203 14:14:19.725997 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed\": container with ID starting with 1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed not found: ID does not exist" containerID="1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed" Dec 03 14:14:19.726060 master-0 kubenswrapper[4430]: I1203 14:14:19.726056 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed" err="rpc error: code = NotFound desc = could not find container \"1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed\": container with ID starting with 1edf9984746b29eb15ecc40e6b36e7b1d1a79307420d65a5d79169d1229902ed not found: ID does not exist" Dec 03 14:14:19.726858 master-0 kubenswrapper[4430]: E1203 14:14:19.726694 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7\": container with ID starting with c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7 not found: ID does not exist" containerID="c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7" Dec 03 14:14:19.726858 master-0 kubenswrapper[4430]: I1203 14:14:19.726759 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" 
containerID="c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7" err="rpc error: code = NotFound desc = could not find container \"c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7\": container with ID starting with c7807975a89aacce92be2f4525a81880581cffe16956ea29153249e23eaaa3e7 not found: ID does not exist" Dec 03 14:14:19.728589 master-0 kubenswrapper[4430]: E1203 14:14:19.728543 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b\": container with ID starting with 4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b not found: ID does not exist" containerID="4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b" Dec 03 14:14:19.728589 master-0 kubenswrapper[4430]: I1203 14:14:19.728581 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b" err="rpc error: code = NotFound desc = could not find container \"4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b\": container with ID starting with 4682586afd0eda575e78a9e3b049bc1df842dc5e37a1400e1a725999e085355b not found: ID does not exist" Dec 03 14:14:19.729013 master-0 kubenswrapper[4430]: E1203 14:14:19.728976 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c\": container with ID starting with 1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c not found: ID does not exist" containerID="1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c" Dec 03 14:14:19.729013 master-0 kubenswrapper[4430]: I1203 14:14:19.729000 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" 
containerID="1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c" err="rpc error: code = NotFound desc = could not find container \"1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c\": container with ID starting with 1986c96a84bccd7ac79093902caa34c50a1731ffa782228bdb791c03357cb77c not found: ID does not exist" Dec 03 14:14:19.730634 master-0 kubenswrapper[4430]: E1203 14:14:19.730587 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b\": container with ID starting with aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b not found: ID does not exist" containerID="aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b" Dec 03 14:14:19.730634 master-0 kubenswrapper[4430]: I1203 14:14:19.730626 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b" err="rpc error: code = NotFound desc = could not find container \"aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b\": container with ID starting with aa7447640c5fa66d68820f9eab73651b859f1a4e98b6dae8acd11d30c0a6650b not found: ID does not exist" Dec 03 14:14:19.952024 master-0 kubenswrapper[4430]: I1203 14:14:19.951892 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:19.953254 master-0 kubenswrapper[4430]: I1203 14:14:19.953196 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:19.958658 master-0 kubenswrapper[4430]: I1203 14:14:19.958622 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:20.805639 master-0 kubenswrapper[4430]: I1203 14:14:20.805522 4430 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:14:21.776488 master-0 kubenswrapper[4430]: I1203 14:14:21.776347 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:14:21.777240 master-0 kubenswrapper[4430]: E1203 14:14:21.777182 4430 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Dec 03 14:14:21.777761 master-0 kubenswrapper[4430]: E1203 14:14:21.777720 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:14:53.777683001 +0000 UTC m=+394.400597107 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : secret "networking-console-plugin-cert" not found Dec 03 14:14:22.234656 master-0 kubenswrapper[4430]: I1203 14:14:22.230229 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Dec 03 14:14:22.234656 master-0 kubenswrapper[4430]: I1203 14:14:22.231358 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.238855 master-0 kubenswrapper[4430]: I1203 14:14:22.238740 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz" Dec 03 14:14:22.239003 master-0 kubenswrapper[4430]: I1203 14:14:22.238980 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 14:14:22.261241 master-0 kubenswrapper[4430]: I1203 14:14:22.261144 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:22.261635 master-0 kubenswrapper[4430]: I1203 14:14:22.261573 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" containerName="installer" containerID="cri-o://e2570fe65faeb3a26079115c3f16ca3645b95062599663d83af730233ef1a157" gracePeriod=30 Dec 03 14:14:22.265290 master-0 kubenswrapper[4430]: I1203 14:14:22.265227 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Dec 03 14:14:22.286777 master-0 kubenswrapper[4430]: I1203 14:14:22.286706 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.287041 master-0 kubenswrapper[4430]: I1203 14:14:22.286804 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " 
pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.287041 master-0 kubenswrapper[4430]: I1203 14:14:22.286866 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.389467 master-0 kubenswrapper[4430]: I1203 14:14:22.389302 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.389815 master-0 kubenswrapper[4430]: I1203 14:14:22.389567 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.389815 master-0 kubenswrapper[4430]: I1203 14:14:22.389751 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.389815 master-0 kubenswrapper[4430]: I1203 14:14:22.389748 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " 
pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.389927 master-0 kubenswrapper[4430]: I1203 14:14:22.389898 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.687719 master-0 kubenswrapper[4430]: I1203 14:14:22.687527 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access\") pod \"installer-5-master-0\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:22.690890 master-0 kubenswrapper[4430]: I1203 14:14:22.690683 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-648d88c756-vswh8"] Dec 03 14:14:22.862485 master-0 kubenswrapper[4430]: I1203 14:14:22.861509 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:14:23.715611 master-0 kubenswrapper[4430]: I1203 14:14:23.715526 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Dec 03 14:14:23.716645 master-0 kubenswrapper[4430]: I1203 14:14:23.716599 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.734577 master-0 kubenswrapper[4430]: I1203 14:14:23.734471 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Dec 03 14:14:23.753803 master-0 kubenswrapper[4430]: I1203 14:14:23.753117 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Dec 03 14:14:23.758776 master-0 kubenswrapper[4430]: W1203 14:14:23.758686 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5db1386_71f6_4b27_b686_5a3bb35659fa.slice/crio-ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd WatchSource:0}: Error finding container ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd: Status 404 returned error can't find the container with id ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd Dec 03 14:14:23.823887 master-0 kubenswrapper[4430]: I1203 14:14:23.822899 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.823887 master-0 kubenswrapper[4430]: I1203 14:14:23.822975 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.823887 master-0 kubenswrapper[4430]: I1203 14:14:23.823040 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.830623 master-0 kubenswrapper[4430]: I1203 14:14:23.830148 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"f5db1386-71f6-4b27-b686-5a3bb35659fa","Type":"ContainerStarted","Data":"ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd"} Dec 03 14:14:23.924394 master-0 kubenswrapper[4430]: I1203 14:14:23.924316 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.924394 master-0 kubenswrapper[4430]: I1203 14:14:23.924392 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.924955 master-0 kubenswrapper[4430]: I1203 14:14:23.924475 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.924955 master-0 kubenswrapper[4430]: I1203 14:14:23.924601 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.924955 master-0 kubenswrapper[4430]: I1203 14:14:23.924658 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:23.942292 master-0 kubenswrapper[4430]: I1203 14:14:23.942225 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access\") pod \"installer-3-master-0\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:24.098125 master-0 kubenswrapper[4430]: I1203 14:14:24.097995 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:14:24.512898 master-0 kubenswrapper[4430]: I1203 14:14:24.512827 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Dec 03 14:14:24.837265 master-0 kubenswrapper[4430]: I1203 14:14:24.837039 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8","Type":"ContainerStarted","Data":"a9fb0b8ef0c1ee9efc43619bd47da67e35b642a4613e53d76b6b1a73738e05e5"} Dec 03 14:14:24.838559 master-0 kubenswrapper[4430]: I1203 14:14:24.838485 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"f5db1386-71f6-4b27-b686-5a3bb35659fa","Type":"ContainerStarted","Data":"144d44b306e446dc1d788ddcb797e07db18c86fb1e0b811ea16c48a30c32fe36"} Dec 03 14:14:24.860262 master-0 kubenswrapper[4430]: I1203 14:14:24.860171 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.860149315 podStartE2EDuration="2.860149315s" podCreationTimestamp="2025-12-03 14:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:24.858138359 +0000 UTC m=+365.481052435" watchObservedRunningTime="2025-12-03 14:14:24.860149315 +0000 UTC m=+365.483063381" Dec 03 14:14:25.845349 master-0 kubenswrapper[4430]: I1203 14:14:25.845265 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8","Type":"ContainerStarted","Data":"f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9"} Dec 03 14:14:25.862599 master-0 kubenswrapper[4430]: I1203 14:14:25.862508 4430 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.862484223 podStartE2EDuration="2.862484223s" podCreationTimestamp="2025-12-03 14:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:25.862202145 +0000 UTC m=+366.485116221" watchObservedRunningTime="2025-12-03 14:14:25.862484223 +0000 UTC m=+366.485398299" Dec 03 14:14:31.622296 master-0 kubenswrapper[4430]: I1203 14:14:31.622188 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:14:31.623124 master-0 kubenswrapper[4430]: I1203 14:14:31.622319 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:14:31.623124 master-0 kubenswrapper[4430]: I1203 14:14:31.622409 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:14:31.623703 master-0 kubenswrapper[4430]: I1203 14:14:31.623665 4430 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb"} pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 03 14:14:31.623804 master-0 
kubenswrapper[4430]: I1203 14:14:31.623768 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" containerID="cri-o://6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb" gracePeriod=600 Dec 03 14:14:31.663094 master-0 kubenswrapper[4430]: I1203 14:14:31.662999 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:14:31.664075 master-0 kubenswrapper[4430]: E1203 14:14:31.664045 4430 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: secret "telemeter-client-tls" not found Dec 03 14:14:31.664134 master-0 kubenswrapper[4430]: E1203 14:14:31.664102 4430 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:15:35.664087902 +0000 UTC m=+436.287001978 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : secret "telemeter-client-tls" not found Dec 03 14:14:31.901532 master-0 kubenswrapper[4430]: I1203 14:14:31.901475 4430 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb" exitCode=0 Dec 03 14:14:31.901532 master-0 kubenswrapper[4430]: I1203 14:14:31.901532 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb"} Dec 03 14:14:31.901885 master-0 kubenswrapper[4430]: I1203 14:14:31.901573 4430 scope.go:117] "RemoveContainer" containerID="231d70dfcd4fcc8eb2b3fb42e727308845d827bcb58bdbab372a9e325bfc9160" Dec 03 14:14:32.913516 master-0 kubenswrapper[4430]: I1203 14:14:32.913410 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"6781555008cceeeafa864fdc3e6eecbe93001626b5e30a88f394829d975fd631"} Dec 03 14:14:35.023516 master-0 kubenswrapper[4430]: I1203 14:14:35.023451 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:14:35.027731 master-0 kubenswrapper[4430]: I1203 14:14:35.027668 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"installer-4-master-0\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:14:35.126049 master-0 kubenswrapper[4430]: I1203 14:14:35.125992 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") pod \"0b1e0884-ff54-419b-90d3-25f561a6391d\" (UID: \"0b1e0884-ff54-419b-90d3-25f561a6391d\") " Dec 03 14:14:35.132979 master-0 kubenswrapper[4430]: I1203 14:14:35.132884 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b1e0884-ff54-419b-90d3-25f561a6391d" (UID: "0b1e0884-ff54-419b-90d3-25f561a6391d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:14:35.228073 master-0 kubenswrapper[4430]: I1203 14:14:35.228020 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b1e0884-ff54-419b-90d3-25f561a6391d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:42.983824 master-0 kubenswrapper[4430]: I1203 14:14:42.983766 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_acc7e12a-1547-4a54-b220-d8bc7e0a0fcc/installer/0.log" Dec 03 14:14:42.983824 master-0 kubenswrapper[4430]: I1203 14:14:42.983825 4430 generic.go:334] "Generic (PLEG): container finished" podID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" containerID="e2570fe65faeb3a26079115c3f16ca3645b95062599663d83af730233ef1a157" exitCode=1 Dec 03 14:14:42.984791 master-0 kubenswrapper[4430]: I1203 14:14:42.983865 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc","Type":"ContainerDied","Data":"e2570fe65faeb3a26079115c3f16ca3645b95062599663d83af730233ef1a157"} Dec 03 14:14:43.528787 master-0 kubenswrapper[4430]: I1203 14:14:43.528676 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_acc7e12a-1547-4a54-b220-d8bc7e0a0fcc/installer/0.log" Dec 03 14:14:43.528982 master-0 kubenswrapper[4430]: I1203 14:14:43.528792 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:43.570909 master-0 kubenswrapper[4430]: I1203 14:14:43.570853 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access\") pod \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " Dec 03 14:14:43.571166 master-0 kubenswrapper[4430]: I1203 14:14:43.570945 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock\") pod \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " Dec 03 14:14:43.571166 master-0 kubenswrapper[4430]: I1203 14:14:43.570970 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir\") pod \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\" (UID: \"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc\") " Dec 03 14:14:43.571434 master-0 kubenswrapper[4430]: I1203 14:14:43.571393 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" (UID: "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:14:43.571652 master-0 kubenswrapper[4430]: I1203 14:14:43.571594 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock" (OuterVolumeSpecName: "var-lock") pod "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" (UID: "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:14:43.574384 master-0 kubenswrapper[4430]: I1203 14:14:43.574314 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" (UID: "acc7e12a-1547-4a54-b220-d8bc7e0a0fcc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:14:43.673346 master-0 kubenswrapper[4430]: I1203 14:14:43.673282 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:43.673346 master-0 kubenswrapper[4430]: I1203 14:14:43.673328 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:43.673346 master-0 kubenswrapper[4430]: I1203 14:14:43.673337 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:43.994687 master-0 kubenswrapper[4430]: I1203 14:14:43.994636 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_acc7e12a-1547-4a54-b220-d8bc7e0a0fcc/installer/0.log" Dec 03 14:14:43.995330 master-0 kubenswrapper[4430]: I1203 14:14:43.994712 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"acc7e12a-1547-4a54-b220-d8bc7e0a0fcc","Type":"ContainerDied","Data":"70f097298f2922e9083450244c57c979dabce7ca374a8306eecebbd3db1233f4"} Dec 03 14:14:43.995330 master-0 kubenswrapper[4430]: I1203 
14:14:43.994764 4430 scope.go:117] "RemoveContainer" containerID="e2570fe65faeb3a26079115c3f16ca3645b95062599663d83af730233ef1a157" Dec 03 14:14:43.995330 master-0 kubenswrapper[4430]: I1203 14:14:43.994833 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Dec 03 14:14:44.030356 master-0 kubenswrapper[4430]: I1203 14:14:44.030267 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:44.038127 master-0 kubenswrapper[4430]: I1203 14:14:44.038056 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Dec 03 14:14:45.592779 master-0 kubenswrapper[4430]: I1203 14:14:45.592713 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" path="/var/lib/kubelet/pods/acc7e12a-1547-4a54-b220-d8bc7e0a0fcc/volumes" Dec 03 14:14:47.683274 master-0 kubenswrapper[4430]: I1203 14:14:47.683209 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Dec 03 14:14:47.683793 master-0 kubenswrapper[4430]: E1203 14:14:47.683626 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" containerName="installer" Dec 03 14:14:47.683793 master-0 kubenswrapper[4430]: I1203 14:14:47.683645 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" containerName="installer" Dec 03 14:14:47.683894 master-0 kubenswrapper[4430]: I1203 14:14:47.683828 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="acc7e12a-1547-4a54-b220-d8bc7e0a0fcc" containerName="installer" Dec 03 14:14:47.684556 master-0 kubenswrapper[4430]: I1203 14:14:47.684525 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.689579 master-0 kubenswrapper[4430]: I1203 14:14:47.689527 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Dec 03 14:14:47.698834 master-0 kubenswrapper[4430]: I1203 14:14:47.698749 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Dec 03 14:14:47.730105 master-0 kubenswrapper[4430]: I1203 14:14:47.729944 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-648d88c756-vswh8" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" containerID="cri-o://113fc17037ed6814061d8e6003d126d84ffb64ce5d368f93c8fa094292f35bc6" gracePeriod=15 Dec 03 14:14:47.742933 master-0 kubenswrapper[4430]: I1203 14:14:47.742861 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.743050 master-0 kubenswrapper[4430]: I1203 14:14:47.743012 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.743246 master-0 kubenswrapper[4430]: I1203 14:14:47.743215 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " 
pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.846259 master-0 kubenswrapper[4430]: I1203 14:14:47.846170 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.846577 master-0 kubenswrapper[4430]: I1203 14:14:47.846366 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.846577 master-0 kubenswrapper[4430]: I1203 14:14:47.846511 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.846697 master-0 kubenswrapper[4430]: I1203 14:14:47.846639 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.846748 master-0 kubenswrapper[4430]: I1203 14:14:47.846709 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:47.867074 master-0 kubenswrapper[4430]: 
I1203 14:14:47.867009 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:48.016709 master-0 kubenswrapper[4430]: I1203 14:14:48.016641 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Dec 03 14:14:48.033684 master-0 kubenswrapper[4430]: I1203 14:14:48.033620 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-648d88c756-vswh8_62f94ae7-6043-4761-a16b-e0f072b1364b/console/2.log" Dec 03 14:14:48.033684 master-0 kubenswrapper[4430]: I1203 14:14:48.033684 4430 generic.go:334] "Generic (PLEG): container finished" podID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerID="113fc17037ed6814061d8e6003d126d84ffb64ce5d368f93c8fa094292f35bc6" exitCode=2 Dec 03 14:14:48.034056 master-0 kubenswrapper[4430]: I1203 14:14:48.033726 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerDied","Data":"113fc17037ed6814061d8e6003d126d84ffb64ce5d368f93c8fa094292f35bc6"} Dec 03 14:14:48.196264 master-0 kubenswrapper[4430]: I1203 14:14:48.196202 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-648d88c756-vswh8_62f94ae7-6043-4761-a16b-e0f072b1364b/console/2.log" Dec 03 14:14:48.196559 master-0 kubenswrapper[4430]: I1203 14:14:48.196284 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.267840 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.267964 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.268028 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.268093 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.268127 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 
14:14:48.268182 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.269173 master-0 kubenswrapper[4430]: I1203 14:14:48.268219 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") pod \"62f94ae7-6043-4761-a16b-e0f072b1364b\" (UID: \"62f94ae7-6043-4761-a16b-e0f072b1364b\") " Dec 03 14:14:48.271929 master-0 kubenswrapper[4430]: I1203 14:14:48.270125 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config" (OuterVolumeSpecName: "console-config") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:14:48.271929 master-0 kubenswrapper[4430]: I1203 14:14:48.270138 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca" (OuterVolumeSpecName: "service-ca") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:14:48.271929 master-0 kubenswrapper[4430]: I1203 14:14:48.270473 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:14:48.276068 master-0 kubenswrapper[4430]: I1203 14:14:48.274718 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:14:48.280507 master-0 kubenswrapper[4430]: I1203 14:14:48.279665 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:14:48.281321 master-0 kubenswrapper[4430]: I1203 14:14:48.281273 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9" (OuterVolumeSpecName: "kube-api-access-nddv9") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "kube-api-access-nddv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:14:48.282207 master-0 kubenswrapper[4430]: I1203 14:14:48.282140 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "62f94ae7-6043-4761-a16b-e0f072b1364b" (UID: "62f94ae7-6043-4761-a16b-e0f072b1364b"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369805 4430 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369871 4430 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369882 4430 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-console-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369891 4430 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-service-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369900 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nddv9\" (UniqueName: \"kubernetes.io/projected/62f94ae7-6043-4761-a16b-e0f072b1364b-kube-api-access-nddv9\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369908 4430 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62f94ae7-6043-4761-a16b-e0f072b1364b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.369898 master-0 kubenswrapper[4430]: I1203 14:14:48.369917 4430 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/62f94ae7-6043-4761-a16b-e0f072b1364b-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:14:48.492524 master-0 kubenswrapper[4430]: I1203 14:14:48.486835 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Dec 03 14:14:49.042370 master-0 kubenswrapper[4430]: I1203 14:14:49.042296 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3165c60b-3cd2-4bda-8c55-aecf00bef18d","Type":"ContainerStarted","Data":"b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60"} Dec 03 14:14:49.042370 master-0 kubenswrapper[4430]: I1203 14:14:49.042364 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3165c60b-3cd2-4bda-8c55-aecf00bef18d","Type":"ContainerStarted","Data":"f1bc4d5009d61af02ec9ccbe405ff8349082a9b82cb2edd836029964295a19b2"} Dec 03 14:14:49.044368 master-0 kubenswrapper[4430]: I1203 14:14:49.044340 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-648d88c756-vswh8_62f94ae7-6043-4761-a16b-e0f072b1364b/console/2.log" Dec 03 14:14:49.044474 master-0 kubenswrapper[4430]: I1203 14:14:49.044397 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-648d88c756-vswh8" event={"ID":"62f94ae7-6043-4761-a16b-e0f072b1364b","Type":"ContainerDied","Data":"00217725de46a89eb8db92c2bd64b28b001a01546be542f6e8c5a39e356e0798"} Dec 03 14:14:49.044474 master-0 kubenswrapper[4430]: I1203 14:14:49.044445 4430 scope.go:117] "RemoveContainer" containerID="113fc17037ed6814061d8e6003d126d84ffb64ce5d368f93c8fa094292f35bc6" Dec 03 14:14:49.044571 master-0 kubenswrapper[4430]: I1203 14:14:49.044542 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-648d88c756-vswh8" Dec 03 14:14:49.068157 master-0 kubenswrapper[4430]: I1203 14:14:49.068073 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.068053742 podStartE2EDuration="2.068053742s" podCreationTimestamp="2025-12-03 14:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:14:49.062151147 +0000 UTC m=+389.685065213" watchObservedRunningTime="2025-12-03 14:14:49.068053742 +0000 UTC m=+389.690967818" Dec 03 14:14:49.089728 master-0 kubenswrapper[4430]: I1203 14:14:49.089647 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-648d88c756-vswh8"] Dec 03 14:14:49.095735 master-0 kubenswrapper[4430]: I1203 14:14:49.095691 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-648d88c756-vswh8"] Dec 03 14:14:49.595087 master-0 kubenswrapper[4430]: I1203 14:14:49.595025 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" path="/var/lib/kubelet/pods/62f94ae7-6043-4761-a16b-e0f072b1364b/volumes" Dec 03 14:14:53.865610 master-0 kubenswrapper[4430]: I1203 14:14:53.865389 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:14:53.869846 master-0 kubenswrapper[4430]: I1203 14:14:53.869784 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:14:53.998525 master-0 kubenswrapper[4430]: I1203 14:14:53.998460 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-js47f" Dec 03 14:14:54.006952 master-0 kubenswrapper[4430]: I1203 14:14:54.006886 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:14:54.686812 master-0 kubenswrapper[4430]: I1203 14:14:54.685339 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c696657b7-452tx"] Dec 03 14:14:54.686976 master-0 kubenswrapper[4430]: I1203 14:14:54.686923 4430 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:14:55.087707 master-0 kubenswrapper[4430]: I1203 14:14:55.087643 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" event={"ID":"b1b3ab29-77cf-48ac-8881-846c46bb9048","Type":"ContainerStarted","Data":"06ba8de7e8c8a5de13e68a2cf5a8566240a2bcb2bbf93e998626f1da83342f03"} Dec 03 14:14:59.119069 master-0 kubenswrapper[4430]: I1203 14:14:59.118981 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" event={"ID":"b1b3ab29-77cf-48ac-8881-846c46bb9048","Type":"ContainerStarted","Data":"c9517578dc034f0c98dd71c22869d7f9997507ac06ea22d00ae1520c380d0e69"} Dec 03 14:14:59.227585 master-0 kubenswrapper[4430]: I1203 14:14:59.227454 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podStartSLOduration=66.989737586 podStartE2EDuration="1m10.227412725s" podCreationTimestamp="2025-12-03 14:13:49 +0000 UTC" firstStartedPulling="2025-12-03 14:14:54.684610722 +0000 UTC m=+395.307524798" lastFinishedPulling="2025-12-03 14:14:57.922285861 +0000 UTC m=+398.545199937" observedRunningTime="2025-12-03 14:14:59.223152155 +0000 UTC m=+399.846066251" watchObservedRunningTime="2025-12-03 14:14:59.227412725 +0000 UTC m=+399.850326801" Dec 03 14:15:00.197343 master-0 kubenswrapper[4430]: I1203 14:15:00.197233 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv"] Dec 03 14:15:00.198054 master-0 kubenswrapper[4430]: E1203 14:15:00.197778 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" Dec 03 14:15:00.198054 master-0 kubenswrapper[4430]: I1203 14:15:00.197825 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" Dec 03 14:15:00.198137 master-0 kubenswrapper[4430]: I1203 14:15:00.198066 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f94ae7-6043-4761-a16b-e0f072b1364b" containerName="console" Dec 03 14:15:00.199118 master-0 kubenswrapper[4430]: I1203 14:15:00.199093 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.208023 master-0 kubenswrapper[4430]: I1203 14:15:00.207946 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv"] Dec 03 14:15:00.214906 master-0 kubenswrapper[4430]: I1203 14:15:00.214560 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 14:15:00.216441 master-0 kubenswrapper[4430]: I1203 14:15:00.214875 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-qldhm" Dec 03 14:15:00.389965 master-0 kubenswrapper[4430]: I1203 14:15:00.389907 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnlmx\" (UniqueName: \"kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.390273 master-0 kubenswrapper[4430]: I1203 14:15:00.390055 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.390273 master-0 kubenswrapper[4430]: I1203 14:15:00.390087 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: 
\"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.491784 master-0 kubenswrapper[4430]: I1203 14:15:00.491725 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnlmx\" (UniqueName: \"kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.492141 master-0 kubenswrapper[4430]: I1203 14:15:00.491838 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.492141 master-0 kubenswrapper[4430]: I1203 14:15:00.491861 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.493046 master-0 kubenswrapper[4430]: I1203 14:15:00.493013 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.495590 master-0 kubenswrapper[4430]: I1203 14:15:00.495406 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.510477 master-0 kubenswrapper[4430]: I1203 14:15:00.510382 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnlmx\" (UniqueName: \"kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx\") pod \"collect-profiles-29412855-jmbvv\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.538642 master-0 kubenswrapper[4430]: I1203 14:15:00.538539 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:00.979535 master-0 kubenswrapper[4430]: I1203 14:15:00.979463 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv"] Dec 03 14:15:01.138179 master-0 kubenswrapper[4430]: I1203 14:15:01.138103 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" event={"ID":"1933a6a0-6ccc-4629-a0e5-a5a4b4575771","Type":"ContainerStarted","Data":"7fff7ce662fa845cdfc700cd037a460bee9b0c53d221ffce6c558ee21f47bc45"} Dec 03 14:15:02.147133 master-0 kubenswrapper[4430]: I1203 14:15:02.147073 4430 generic.go:334] "Generic (PLEG): container finished" podID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" containerID="a7621eb80f075477c436bb3e2bcabd6d1bfc0df73288588bab7c2b67afe81e35" exitCode=0 Dec 03 14:15:02.147133 master-0 kubenswrapper[4430]: I1203 14:15:02.147134 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" 
event={"ID":"1933a6a0-6ccc-4629-a0e5-a5a4b4575771","Type":"ContainerDied","Data":"a7621eb80f075477c436bb3e2bcabd6d1bfc0df73288588bab7c2b67afe81e35"} Dec 03 14:15:02.785101 master-0 kubenswrapper[4430]: I1203 14:15:02.785048 4430 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:15:02.786683 master-0 kubenswrapper[4430]: I1203 14:15:02.786659 4430 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:15:02.786975 master-0 kubenswrapper[4430]: I1203 14:15:02.786860 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.787462 master-0 kubenswrapper[4430]: I1203 14:15:02.787374 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" containerID="cri-o://1676af95112121a9e343fac781d61b54d4f18bb5d03944dc4409d844ba4c9c5e" gracePeriod=15 Dec 03 14:15:02.787636 master-0 kubenswrapper[4430]: I1203 14:15:02.787546 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-syncer" containerID="cri-o://353ef5bad57ce46db98c0549f921ee8f0ee580567553f3ba9950d113638096f2" gracePeriod=15 Dec 03 14:15:02.787737 master-0 kubenswrapper[4430]: I1203 14:15:02.787468 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e4e74143105a836ab029b335e356e20dcf63f1dfd4df0559287d53a803dfe9b1" gracePeriod=15 Dec 03 14:15:02.787798 master-0 kubenswrapper[4430]: I1203 
14:15:02.787496 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://aa440bd50b25afd3bbdcd911eb6ddd2cb8d5f29270fc9664a389f142c4f8cf24" gracePeriod=15 Dec 03 14:15:02.787866 master-0 kubenswrapper[4430]: I1203 14:15:02.787381 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-check-endpoints" containerID="cri-o://749d4a97321672e94f0f4d6c55d7fa485dfbd3bbe5480f2c579faa82f311605b" gracePeriod=15 Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.788414 4430 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.788896 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-insecure-readyz" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.788914 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-insecure-readyz" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.788926 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.788935 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.788951 4430 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="setup" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.788960 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="setup" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.788971 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-syncer" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.788979 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-syncer" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.788993 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.789002 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: E1203 14:15:02.789018 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-check-endpoints" Dec 03 14:15:02.789029 master-0 kubenswrapper[4430]: I1203 14:15:02.789026 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-check-endpoints" Dec 03 14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789187 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-syncer" Dec 03 14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789208 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-insecure-readyz" Dec 03 
14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789220 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-check-endpoints" Dec 03 14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789236 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="setup" Dec 03 14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789251 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:15:02.790187 master-0 kubenswrapper[4430]: I1203 14:15:02.789264 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5aa2d6b41f5e21a89224256dc48af14" containerName="kube-apiserver" Dec 03 14:15:02.840925 master-0 kubenswrapper[4430]: I1203 14:15:02.839188 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:15:02.933962 master-0 kubenswrapper[4430]: I1203 14:15:02.933177 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.933973 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934027 4430 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934144 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934232 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934485 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934558 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:02.935575 master-0 kubenswrapper[4430]: I1203 14:15:02.934596 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.035650 master-0 kubenswrapper[4430]: I1203 14:15:03.035552 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.035650 master-0 kubenswrapper[4430]: I1203 14:15:03.035605 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035674 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035696 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035729 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035722 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035800 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.035809 master-0 kubenswrapper[4430]: I1203 14:15:03.035763 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.036017 master-0 kubenswrapper[4430]: I1203 14:15:03.035842 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.036017 master-0 kubenswrapper[4430]: I1203 14:15:03.035850 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.036017 master-0 kubenswrapper[4430]: I1203 14:15:03.035871 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.036017 master-0 kubenswrapper[4430]: I1203 14:15:03.035899 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.036147 master-0 kubenswrapper[4430]: I1203 14:15:03.036016 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.036194 master-0 kubenswrapper[4430]: I1203 14:15:03.036162 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.036361 master-0 kubenswrapper[4430]: I1203 14:15:03.036328 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:03.036405 master-0 kubenswrapper[4430]: I1203 14:15:03.036375 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.133558 master-0 kubenswrapper[4430]: I1203 14:15:03.133457 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:15:03.154353 master-0 kubenswrapper[4430]: W1203 14:15:03.154297 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf08d11be0e2919664ff2ea4b2440d0e0.slice/crio-8bfa8e02a9bd3c08293a934da9e2513894e9fcf7babe880424d307aadf87987b WatchSource:0}: Error finding container 8bfa8e02a9bd3c08293a934da9e2513894e9fcf7babe880424d307aadf87987b: Status 404 returned error can't find the container with id 8bfa8e02a9bd3c08293a934da9e2513894e9fcf7babe880424d307aadf87987b Dec 03 14:15:03.156859 master-0 kubenswrapper[4430]: I1203 14:15:03.156824 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_f5aa2d6b41f5e21a89224256dc48af14/kube-apiserver-cert-syncer/0.log" Dec 03 14:15:03.158067 master-0 kubenswrapper[4430]: I1203 14:15:03.158033 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="749d4a97321672e94f0f4d6c55d7fa485dfbd3bbe5480f2c579faa82f311605b" exitCode=0 Dec 03 14:15:03.158067 master-0 kubenswrapper[4430]: I1203 14:15:03.158059 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="aa440bd50b25afd3bbdcd911eb6ddd2cb8d5f29270fc9664a389f142c4f8cf24" exitCode=0 Dec 03 14:15:03.158067 master-0 kubenswrapper[4430]: I1203 14:15:03.158068 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="e4e74143105a836ab029b335e356e20dcf63f1dfd4df0559287d53a803dfe9b1" exitCode=0 Dec 03 14:15:03.158199 master-0 kubenswrapper[4430]: I1203 14:15:03.158076 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="353ef5bad57ce46db98c0549f921ee8f0ee580567553f3ba9950d113638096f2" exitCode=2 Dec 03 14:15:03.158199 master-0 
kubenswrapper[4430]: E1203 14:15:03.157990 4430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.187dba266a74080d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:f08d11be0e2919664ff2ea4b2440d0e0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:15:03.156615181 +0000 UTC m=+403.779529257,LastTimestamp:2025-12-03 14:15:03.156615181 +0000 UTC m=+403.779529257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:15:03.160040 master-0 kubenswrapper[4430]: I1203 14:15:03.159816 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5db1386-71f6-4b27-b686-5a3bb35659fa" containerID="144d44b306e446dc1d788ddcb797e07db18c86fb1e0b811ea16c48a30c32fe36" exitCode=0 Dec 03 14:15:03.160040 master-0 kubenswrapper[4430]: I1203 14:15:03.159903 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"f5db1386-71f6-4b27-b686-5a3bb35659fa","Type":"ContainerDied","Data":"144d44b306e446dc1d788ddcb797e07db18c86fb1e0b811ea16c48a30c32fe36"} Dec 03 14:15:03.160937 master-0 kubenswrapper[4430]: I1203 14:15:03.160890 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5aa2d6b41f5e21a89224256dc48af14" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.161654 master-0 kubenswrapper[4430]: I1203 14:15:03.161382 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.161986 master-0 kubenswrapper[4430]: I1203 14:15:03.161952 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.426157 master-0 kubenswrapper[4430]: I1203 14:15:03.426096 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:03.427427 master-0 kubenswrapper[4430]: I1203 14:15:03.427374 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.427881 master-0 kubenswrapper[4430]: I1203 14:15:03.427849 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.428535 master-0 kubenswrapper[4430]: I1203 14:15:03.428469 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5aa2d6b41f5e21a89224256dc48af14" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.429206 master-0 kubenswrapper[4430]: I1203 14:15:03.429165 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:03.450937 master-0 kubenswrapper[4430]: I1203 14:15:03.450874 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume\") pod \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " Dec 03 14:15:03.451237 master-0 kubenswrapper[4430]: I1203 14:15:03.451083 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnlmx\" (UniqueName: \"kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx\") pod \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " Dec 03 14:15:03.451237 master-0 kubenswrapper[4430]: I1203 14:15:03.451164 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume\") pod \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\" (UID: \"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\") " Dec 03 14:15:03.451950 master-0 kubenswrapper[4430]: I1203 14:15:03.451898 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume" (OuterVolumeSpecName: "config-volume") pod "1933a6a0-6ccc-4629-a0e5-a5a4b4575771" (UID: "1933a6a0-6ccc-4629-a0e5-a5a4b4575771"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:15:03.454407 master-0 kubenswrapper[4430]: I1203 14:15:03.454377 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1933a6a0-6ccc-4629-a0e5-a5a4b4575771" (UID: "1933a6a0-6ccc-4629-a0e5-a5a4b4575771"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:15:03.454639 master-0 kubenswrapper[4430]: I1203 14:15:03.454552 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx" (OuterVolumeSpecName: "kube-api-access-dnlmx") pod "1933a6a0-6ccc-4629-a0e5-a5a4b4575771" (UID: "1933a6a0-6ccc-4629-a0e5-a5a4b4575771"). InnerVolumeSpecName "kube-api-access-dnlmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:15:03.558772 master-0 kubenswrapper[4430]: I1203 14:15:03.554964 4430 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-secret-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:03.558772 master-0 kubenswrapper[4430]: I1203 14:15:03.555023 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnlmx\" (UniqueName: \"kubernetes.io/projected/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-kube-api-access-dnlmx\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:03.558772 master-0 kubenswrapper[4430]: I1203 14:15:03.555041 4430 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1933a6a0-6ccc-4629-a0e5-a5a4b4575771-config-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:04.177482 master-0 kubenswrapper[4430]: I1203 14:15:04.177304 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f08d11be0e2919664ff2ea4b2440d0e0","Type":"ContainerStarted","Data":"9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde"} Dec 03 14:15:04.177482 master-0 kubenswrapper[4430]: I1203 14:15:04.177354 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"f08d11be0e2919664ff2ea4b2440d0e0","Type":"ContainerStarted","Data":"8bfa8e02a9bd3c08293a934da9e2513894e9fcf7babe880424d307aadf87987b"} Dec 03 14:15:04.178882 master-0 kubenswrapper[4430]: I1203 14:15:04.178837 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.179426 master-0 kubenswrapper[4430]: I1203 14:15:04.179380 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.180090 master-0 kubenswrapper[4430]: I1203 14:15:04.180037 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.180849 master-0 kubenswrapper[4430]: I1203 14:15:04.180818 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" event={"ID":"1933a6a0-6ccc-4629-a0e5-a5a4b4575771","Type":"ContainerDied","Data":"7fff7ce662fa845cdfc700cd037a460bee9b0c53d221ffce6c558ee21f47bc45"} Dec 03 14:15:04.180892 master-0 kubenswrapper[4430]: I1203 14:15:04.180853 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:15:04.180929 master-0 kubenswrapper[4430]: I1203 14:15:04.180896 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fff7ce662fa845cdfc700cd037a460bee9b0c53d221ffce6c558ee21f47bc45" Dec 03 14:15:04.185378 master-0 kubenswrapper[4430]: I1203 14:15:04.185305 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.185946 master-0 kubenswrapper[4430]: I1203 14:15:04.185888 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.186495 master-0 kubenswrapper[4430]: I1203 14:15:04.186443 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.567136 master-0 kubenswrapper[4430]: I1203 14:15:04.567077 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:15:04.568042 master-0 kubenswrapper[4430]: I1203 14:15:04.567994 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.568602 master-0 kubenswrapper[4430]: I1203 14:15:04.568538 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.569106 master-0 kubenswrapper[4430]: I1203 14:15:04.569074 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:04.672786 master-0 kubenswrapper[4430]: I1203 14:15:04.672716 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir\") pod \"f5db1386-71f6-4b27-b686-5a3bb35659fa\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " Dec 03 14:15:04.673017 master-0 kubenswrapper[4430]: I1203 14:15:04.672831 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir" 
(OuterVolumeSpecName: "kubelet-dir") pod "f5db1386-71f6-4b27-b686-5a3bb35659fa" (UID: "f5db1386-71f6-4b27-b686-5a3bb35659fa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:04.673017 master-0 kubenswrapper[4430]: I1203 14:15:04.672870 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access\") pod \"f5db1386-71f6-4b27-b686-5a3bb35659fa\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " Dec 03 14:15:04.673017 master-0 kubenswrapper[4430]: I1203 14:15:04.672904 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock\") pod \"f5db1386-71f6-4b27-b686-5a3bb35659fa\" (UID: \"f5db1386-71f6-4b27-b686-5a3bb35659fa\") " Dec 03 14:15:04.673162 master-0 kubenswrapper[4430]: I1203 14:15:04.673051 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock" (OuterVolumeSpecName: "var-lock") pod "f5db1386-71f6-4b27-b686-5a3bb35659fa" (UID: "f5db1386-71f6-4b27-b686-5a3bb35659fa"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:04.673279 master-0 kubenswrapper[4430]: I1203 14:15:04.673253 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:04.673279 master-0 kubenswrapper[4430]: I1203 14:15:04.673276 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5db1386-71f6-4b27-b686-5a3bb35659fa-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:04.675638 master-0 kubenswrapper[4430]: I1203 14:15:04.675601 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5db1386-71f6-4b27-b686-5a3bb35659fa" (UID: "f5db1386-71f6-4b27-b686-5a3bb35659fa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:15:04.774378 master-0 kubenswrapper[4430]: I1203 14:15:04.774140 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5db1386-71f6-4b27-b686-5a3bb35659fa-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:05.222753 master-0 kubenswrapper[4430]: I1203 14:15:05.222658 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"f5db1386-71f6-4b27-b686-5a3bb35659fa","Type":"ContainerDied","Data":"ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd"} Dec 03 14:15:05.223309 master-0 kubenswrapper[4430]: I1203 14:15:05.222775 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed363fca03249d187cc7c56171c3bdc62dd970f41f46c4cece031cef1470eebd" Dec 03 14:15:05.223309 master-0 kubenswrapper[4430]: I1203 14:15:05.223005 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:15:05.246561 master-0 kubenswrapper[4430]: I1203 14:15:05.246103 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:05.247393 master-0 kubenswrapper[4430]: I1203 14:15:05.247116 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:05.248390 master-0 kubenswrapper[4430]: I1203 14:15:05.248288 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:06.236831 master-0 kubenswrapper[4430]: I1203 14:15:06.236733 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_f5aa2d6b41f5e21a89224256dc48af14/kube-apiserver-cert-syncer/0.log" Dec 03 14:15:06.238158 master-0 kubenswrapper[4430]: I1203 14:15:06.238103 4430 generic.go:334] "Generic (PLEG): container finished" podID="f5aa2d6b41f5e21a89224256dc48af14" containerID="1676af95112121a9e343fac781d61b54d4f18bb5d03944dc4409d844ba4c9c5e" exitCode=0 Dec 03 14:15:07.001973 master-0 kubenswrapper[4430]: I1203 14:15:07.001923 4430 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_f5aa2d6b41f5e21a89224256dc48af14/kube-apiserver-cert-syncer/0.log" Dec 03 14:15:07.003048 master-0 kubenswrapper[4430]: I1203 14:15:07.002999 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:07.004332 master-0 kubenswrapper[4430]: I1203 14:15:07.004255 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.005011 master-0 kubenswrapper[4430]: I1203 14:15:07.004947 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5aa2d6b41f5e21a89224256dc48af14" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.005575 master-0 kubenswrapper[4430]: I1203 14:15:07.005526 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.006055 master-0 kubenswrapper[4430]: I1203 14:15:07.006013 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.017620 master-0 kubenswrapper[4430]: I1203 14:15:07.017355 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") pod \"f5aa2d6b41f5e21a89224256dc48af14\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " Dec 03 14:15:07.017921 master-0 kubenswrapper[4430]: I1203 14:15:07.017531 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f5aa2d6b41f5e21a89224256dc48af14" (UID: "f5aa2d6b41f5e21a89224256dc48af14"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:07.018124 master-0 kubenswrapper[4430]: I1203 14:15:07.018091 4430 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-audit-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:07.119213 master-0 kubenswrapper[4430]: I1203 14:15:07.119127 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") pod \"f5aa2d6b41f5e21a89224256dc48af14\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " Dec 03 14:15:07.119559 master-0 kubenswrapper[4430]: I1203 14:15:07.119291 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f5aa2d6b41f5e21a89224256dc48af14" (UID: "f5aa2d6b41f5e21a89224256dc48af14"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:07.119559 master-0 kubenswrapper[4430]: I1203 14:15:07.119457 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") pod \"f5aa2d6b41f5e21a89224256dc48af14\" (UID: \"f5aa2d6b41f5e21a89224256dc48af14\") " Dec 03 14:15:07.119653 master-0 kubenswrapper[4430]: I1203 14:15:07.119561 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f5aa2d6b41f5e21a89224256dc48af14" (UID: "f5aa2d6b41f5e21a89224256dc48af14"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:07.120012 master-0 kubenswrapper[4430]: I1203 14:15:07.119975 4430 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-cert-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:07.120012 master-0 kubenswrapper[4430]: I1203 14:15:07.120005 4430 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f5aa2d6b41f5e21a89224256dc48af14-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:07.251548 master-0 kubenswrapper[4430]: I1203 14:15:07.251495 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_f5aa2d6b41f5e21a89224256dc48af14/kube-apiserver-cert-syncer/0.log" Dec 03 14:15:07.252891 master-0 kubenswrapper[4430]: I1203 14:15:07.252858 4430 scope.go:117] "RemoveContainer" containerID="749d4a97321672e94f0f4d6c55d7fa485dfbd3bbe5480f2c579faa82f311605b" Dec 03 14:15:07.253129 master-0 kubenswrapper[4430]: I1203 14:15:07.253070 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:07.270019 master-0 kubenswrapper[4430]: I1203 14:15:07.269972 4430 scope.go:117] "RemoveContainer" containerID="aa440bd50b25afd3bbdcd911eb6ddd2cb8d5f29270fc9664a389f142c4f8cf24" Dec 03 14:15:07.283779 master-0 kubenswrapper[4430]: I1203 14:15:07.283650 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.284142 master-0 kubenswrapper[4430]: I1203 14:15:07.284109 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5aa2d6b41f5e21a89224256dc48af14" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.284573 master-0 kubenswrapper[4430]: I1203 14:15:07.284532 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:07.285023 master-0 kubenswrapper[4430]: I1203 14:15:07.284981 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Dec 03 14:15:07.288408 master-0 kubenswrapper[4430]: I1203 14:15:07.288382 4430 scope.go:117] "RemoveContainer" containerID="e4e74143105a836ab029b335e356e20dcf63f1dfd4df0559287d53a803dfe9b1" Dec 03 14:15:07.309832 master-0 kubenswrapper[4430]: I1203 14:15:07.309774 4430 scope.go:117] "RemoveContainer" containerID="353ef5bad57ce46db98c0549f921ee8f0ee580567553f3ba9950d113638096f2" Dec 03 14:15:07.324953 master-0 kubenswrapper[4430]: I1203 14:15:07.324893 4430 scope.go:117] "RemoveContainer" containerID="1676af95112121a9e343fac781d61b54d4f18bb5d03944dc4409d844ba4c9c5e" Dec 03 14:15:07.344177 master-0 kubenswrapper[4430]: I1203 14:15:07.344128 4430 scope.go:117] "RemoveContainer" containerID="cc112e6842d5a1677f57d5cb903a1e5d6f4646550a794d787fb3ec9cc8aeb9a3" Dec 03 14:15:07.593554 master-0 kubenswrapper[4430]: I1203 14:15:07.593309 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5aa2d6b41f5e21a89224256dc48af14" path="/var/lib/kubelet/pods/f5aa2d6b41f5e21a89224256dc48af14/volumes" Dec 03 14:15:08.208823 master-0 kubenswrapper[4430]: E1203 14:15:08.208769 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:15:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:15:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:15:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:15:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:08.209875 master-0 kubenswrapper[4430]: E1203 14:15:08.209857 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:08.210686 master-0 kubenswrapper[4430]: E1203 14:15:08.210624 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:08.211386 master-0 kubenswrapper[4430]: E1203 14:15:08.211346 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:08.211967 master-0 kubenswrapper[4430]: E1203 14:15:08.211937 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:08.211967 master-0 kubenswrapper[4430]: E1203 14:15:08.211960 4430 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 14:15:09.588693 master-0 kubenswrapper[4430]: I1203 14:15:09.588576 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.589703 master-0 kubenswrapper[4430]: I1203 14:15:09.589353 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.590733 master-0 kubenswrapper[4430]: I1203 14:15:09.590158 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.673673 master-0 
kubenswrapper[4430]: E1203 14:15:09.673610 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.674490 master-0 kubenswrapper[4430]: E1203 14:15:09.674264 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.674855 master-0 kubenswrapper[4430]: E1203 14:15:09.674751 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.675262 master-0 kubenswrapper[4430]: E1203 14:15:09.675228 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.675828 master-0 kubenswrapper[4430]: E1203 14:15:09.675772 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:09.675828 master-0 kubenswrapper[4430]: I1203 14:15:09.675804 4430 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 14:15:09.676360 master-0 kubenswrapper[4430]: E1203 14:15:09.676195 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Dec 03 14:15:09.877520 master-0 kubenswrapper[4430]: E1203 14:15:09.877363 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Dec 03 14:15:10.278698 master-0 kubenswrapper[4430]: E1203 14:15:10.278608 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Dec 03 14:15:11.081242 master-0 kubenswrapper[4430]: E1203 14:15:11.081102 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Dec 03 14:15:12.722181 master-0 kubenswrapper[4430]: E1203 14:15:12.719309 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Dec 03 14:15:12.723191 master-0 kubenswrapper[4430]: E1203 14:15:12.722396 4430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.187dba266a74080d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:f08d11be0e2919664ff2ea4b2440d0e0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:15:03.156615181 +0000 UTC m=+403.779529257,LastTimestamp:2025-12-03 14:15:03.156615181 +0000 UTC m=+403.779529257,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:15:15.584087 master-0 kubenswrapper[4430]: I1203 14:15:15.584014 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:15.586875 master-0 kubenswrapper[4430]: I1203 14:15:15.586775 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:15.587688 master-0 kubenswrapper[4430]: I1203 14:15:15.587611 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:15.589347 master-0 kubenswrapper[4430]: I1203 14:15:15.588767 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:15.608903 master-0 kubenswrapper[4430]: I1203 14:15:15.608847 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:15.608903 master-0 kubenswrapper[4430]: I1203 14:15:15.608896 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:15.610197 master-0 kubenswrapper[4430]: E1203 14:15:15.610124 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:15.612180 master-0 kubenswrapper[4430]: I1203 14:15:15.612135 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:15.638800 master-0 kubenswrapper[4430]: W1203 14:15:15.638712 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a00233b22d19df39b2e1c8ba133b3c2.slice/crio-d03b1523783c9640878014472eb210c8fcba07b2cfb379d7e1a3dbf16a936b78 WatchSource:0}: Error finding container d03b1523783c9640878014472eb210c8fcba07b2cfb379d7e1a3dbf16a936b78: Status 404 returned error can't find the container with id d03b1523783c9640878014472eb210c8fcba07b2cfb379d7e1a3dbf16a936b78 Dec 03 14:15:15.995468 master-0 kubenswrapper[4430]: E1203 14:15:15.995402 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Dec 03 14:15:16.334022 master-0 kubenswrapper[4430]: I1203 14:15:16.333875 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a" exitCode=0 Dec 03 14:15:16.334022 master-0 kubenswrapper[4430]: I1203 14:15:16.333974 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a"} Dec 03 14:15:16.334273 master-0 kubenswrapper[4430]: I1203 
14:15:16.334041 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"d03b1523783c9640878014472eb210c8fcba07b2cfb379d7e1a3dbf16a936b78"} Dec 03 14:15:16.334452 master-0 kubenswrapper[4430]: I1203 14:15:16.334430 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:16.334526 master-0 kubenswrapper[4430]: I1203 14:15:16.334463 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:16.335754 master-0 kubenswrapper[4430]: I1203 14:15:16.335621 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.335883 master-0 kubenswrapper[4430]: E1203 14:15:16.335767 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:16.337143 master-0 kubenswrapper[4430]: I1203 14:15:16.337030 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.338470 master-0 
kubenswrapper[4430]: I1203 14:15:16.338391 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.339527 master-0 kubenswrapper[4430]: I1203 14:15:16.339463 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8/installer/0.log" Dec 03 14:15:16.339625 master-0 kubenswrapper[4430]: I1203 14:15:16.339571 4430 generic.go:334] "Generic (PLEG): container finished" podID="bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" containerID="f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9" exitCode=1 Dec 03 14:15:16.339683 master-0 kubenswrapper[4430]: I1203 14:15:16.339625 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8","Type":"ContainerDied","Data":"f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9"} Dec 03 14:15:16.341093 master-0 kubenswrapper[4430]: I1203 14:15:16.341007 4430 status_manager.go:851] "Failed to get status for pod" podUID="bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.342130 master-0 kubenswrapper[4430]: I1203 14:15:16.342040 4430 status_manager.go:851] "Failed to get status for pod" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/collect-profiles-29412855-jmbvv\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.342857 master-0 kubenswrapper[4430]: I1203 14:15:16.342806 4430 status_manager.go:851] "Failed to get status for pod" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:16.343696 master-0 kubenswrapper[4430]: I1203 14:15:16.343645 4430 status_manager.go:851] "Failed to get status for pod" podUID="f08d11be0e2919664ff2ea4b2440d0e0" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:15:17.356460 master-0 kubenswrapper[4430]: I1203 14:15:17.355969 4430 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7" exitCode=1 Dec 03 14:15:17.356460 master-0 kubenswrapper[4430]: I1203 14:15:17.356063 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerDied","Data":"dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7"} Dec 03 14:15:17.356460 master-0 kubenswrapper[4430]: I1203 14:15:17.356200 4430 scope.go:117] "RemoveContainer" containerID="1113e5b1c4d5e0ffa93e620a7c8bd750851fb954030c8b620205a79268644060" Dec 03 14:15:17.357129 master-0 kubenswrapper[4430]: I1203 14:15:17.356985 4430 scope.go:117] "RemoveContainer" 
containerID="dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7" Dec 03 14:15:17.370566 master-0 kubenswrapper[4430]: I1203 14:15:17.370515 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01"} Dec 03 14:15:17.370691 master-0 kubenswrapper[4430]: I1203 14:15:17.370582 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3"} Dec 03 14:15:17.370691 master-0 kubenswrapper[4430]: I1203 14:15:17.370602 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4"} Dec 03 14:15:17.370691 master-0 kubenswrapper[4430]: I1203 14:15:17.370614 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007"} Dec 03 14:15:17.530495 master-0 kubenswrapper[4430]: I1203 14:15:17.525011 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:15:17.717410 master-0 kubenswrapper[4430]: I1203 14:15:17.717380 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8/installer/0.log" Dec 03 14:15:17.717545 master-0 kubenswrapper[4430]: I1203 14:15:17.717518 4430 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:15:17.727916 master-0 kubenswrapper[4430]: I1203 14:15:17.727873 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock\") pod \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " Dec 03 14:15:17.727992 master-0 kubenswrapper[4430]: I1203 14:15:17.727981 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access\") pod \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " Dec 03 14:15:17.728035 master-0 kubenswrapper[4430]: I1203 14:15:17.728000 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir\") pod \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\" (UID: \"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\") " Dec 03 14:15:17.728139 master-0 kubenswrapper[4430]: I1203 14:15:17.728067 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock" (OuterVolumeSpecName: "var-lock") pod "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" (UID: "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:17.728269 master-0 kubenswrapper[4430]: I1203 14:15:17.728225 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" (UID: "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:17.728315 master-0 kubenswrapper[4430]: I1203 14:15:17.728297 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:17.731638 master-0 kubenswrapper[4430]: I1203 14:15:17.731592 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" (UID: "bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:15:17.830173 master-0 kubenswrapper[4430]: I1203 14:15:17.829946 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:17.830173 master-0 kubenswrapper[4430]: I1203 14:15:17.830018 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:18.380914 master-0 kubenswrapper[4430]: I1203 14:15:18.380816 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"7bce50c457ac1f4721bc81a570dd238a","Type":"ContainerStarted","Data":"4cf8b9de739b42d0326a9c91865874c6acc457ec4a815cac41e5776a7dc74502"} Dec 03 14:15:18.384977 master-0 kubenswrapper[4430]: I1203 14:15:18.384922 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc"} Dec 03 14:15:18.385113 master-0 kubenswrapper[4430]: I1203 14:15:18.385056 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:18.385233 master-0 kubenswrapper[4430]: I1203 14:15:18.385209 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:18.385283 master-0 kubenswrapper[4430]: I1203 14:15:18.385237 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:18.386995 master-0 kubenswrapper[4430]: I1203 14:15:18.386970 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8/installer/0.log" Dec 03 14:15:18.387125 master-0 kubenswrapper[4430]: I1203 14:15:18.387028 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8","Type":"ContainerDied","Data":"a9fb0b8ef0c1ee9efc43619bd47da67e35b642a4613e53d76b6b1a73738e05e5"} Dec 03 14:15:18.387125 master-0 kubenswrapper[4430]: I1203 14:15:18.387073 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fb0b8ef0c1ee9efc43619bd47da67e35b642a4613e53d76b6b1a73738e05e5" Dec 03 14:15:18.387187 master-0 kubenswrapper[4430]: I1203 14:15:18.387151 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:15:18.912527 master-0 kubenswrapper[4430]: I1203 14:15:18.912441 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:15:19.189180 master-0 kubenswrapper[4430]: I1203 14:15:19.188881 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:15:19.736934 master-0 kubenswrapper[4430]: I1203 14:15:19.736839 4430 scope.go:117] "RemoveContainer" containerID="c01edad1db506ce1a440eec485368dc53175e475c8c14d77a9938e14bf9c40c8" Dec 03 14:15:19.818170 master-0 kubenswrapper[4430]: E1203 14:15:19.818060 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a\": container with ID starting with 921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a not found: ID does not exist" containerID="921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a" Dec 03 14:15:19.818493 master-0 kubenswrapper[4430]: I1203 14:15:19.818156 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a" err="rpc error: code = NotFound desc = could not find container \"921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a\": container with ID starting with 921d89e78fcda04fd1ea5c6b02f95a5fdf5cfffa6b5f5c030dedc0601531019a not found: ID does not exist" Dec 03 14:15:20.413460 master-0 kubenswrapper[4430]: I1203 14:15:20.412615 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_3165c60b-3cd2-4bda-8c55-aecf00bef18d/installer/0.log" Dec 03 14:15:20.413460 master-0 kubenswrapper[4430]: I1203 14:15:20.412676 4430 generic.go:334] "Generic 
(PLEG): container finished" podID="3165c60b-3cd2-4bda-8c55-aecf00bef18d" containerID="b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60" exitCode=1 Dec 03 14:15:20.416198 master-0 kubenswrapper[4430]: I1203 14:15:20.414459 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3165c60b-3cd2-4bda-8c55-aecf00bef18d","Type":"ContainerDied","Data":"b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60"} Dec 03 14:15:20.613432 master-0 kubenswrapper[4430]: I1203 14:15:20.613341 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:20.613432 master-0 kubenswrapper[4430]: I1203 14:15:20.613441 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:20.619226 master-0 kubenswrapper[4430]: I1203 14:15:20.619172 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:21.780668 master-0 kubenswrapper[4430]: I1203 14:15:21.780627 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_3165c60b-3cd2-4bda-8c55-aecf00bef18d/installer/0.log" Dec 03 14:15:21.781500 master-0 kubenswrapper[4430]: I1203 14:15:21.781480 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Dec 03 14:15:21.912868 master-0 kubenswrapper[4430]: I1203 14:15:21.912579 4430 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:15:21.939220 master-0 kubenswrapper[4430]: I1203 14:15:21.939140 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access\") pod \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " Dec 03 14:15:21.939602 master-0 kubenswrapper[4430]: I1203 14:15:21.939286 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir\") pod \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " Dec 03 14:15:21.939602 master-0 kubenswrapper[4430]: I1203 14:15:21.939355 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock\") pod \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\" (UID: \"3165c60b-3cd2-4bda-8c55-aecf00bef18d\") " Dec 03 14:15:21.939602 master-0 kubenswrapper[4430]: I1203 14:15:21.939524 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3165c60b-3cd2-4bda-8c55-aecf00bef18d" (UID: "3165c60b-3cd2-4bda-8c55-aecf00bef18d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:21.939602 master-0 kubenswrapper[4430]: I1203 14:15:21.939546 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock" (OuterVolumeSpecName: "var-lock") pod "3165c60b-3cd2-4bda-8c55-aecf00bef18d" (UID: "3165c60b-3cd2-4bda-8c55-aecf00bef18d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:15:21.940217 master-0 kubenswrapper[4430]: I1203 14:15:21.940182 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:21.940217 master-0 kubenswrapper[4430]: I1203 14:15:21.940217 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3165c60b-3cd2-4bda-8c55-aecf00bef18d-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:21.943023 master-0 kubenswrapper[4430]: I1203 14:15:21.942983 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3165c60b-3cd2-4bda-8c55-aecf00bef18d" (UID: "3165c60b-3cd2-4bda-8c55-aecf00bef18d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:15:22.045052 master-0 kubenswrapper[4430]: I1203 14:15:22.044885 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3165c60b-3cd2-4bda-8c55-aecf00bef18d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:15:22.429991 master-0 kubenswrapper[4430]: I1203 14:15:22.429875 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_3165c60b-3cd2-4bda-8c55-aecf00bef18d/installer/0.log" Dec 03 14:15:22.429991 master-0 kubenswrapper[4430]: I1203 14:15:22.429946 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"3165c60b-3cd2-4bda-8c55-aecf00bef18d","Type":"ContainerDied","Data":"f1bc4d5009d61af02ec9ccbe405ff8349082a9b82cb2edd836029964295a19b2"} Dec 03 14:15:22.430269 master-0 kubenswrapper[4430]: I1203 14:15:22.430007 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bc4d5009d61af02ec9ccbe405ff8349082a9b82cb2edd836029964295a19b2" Dec 03 14:15:22.430269 master-0 kubenswrapper[4430]: I1203 14:15:22.430025 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Dec 03 14:15:23.411380 master-0 kubenswrapper[4430]: I1203 14:15:23.410637 4430 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:23.437692 master-0 kubenswrapper[4430]: I1203 14:15:23.437608 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:23.437692 master-0 kubenswrapper[4430]: I1203 14:15:23.437677 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:23.441837 master-0 kubenswrapper[4430]: I1203 14:15:23.441792 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:23.445315 master-0 kubenswrapper[4430]: I1203 14:15:23.445261 4430 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="8a00233b22d19df39b2e1c8ba133b3c2" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:15:24.444428 master-0 kubenswrapper[4430]: I1203 14:15:24.444345 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:24.444428 master-0 kubenswrapper[4430]: I1203 14:15:24.444399 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:28.918670 master-0 kubenswrapper[4430]: I1203 14:15:28.918512 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:15:28.924734 master-0 kubenswrapper[4430]: I1203 14:15:28.924676 4430 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Dec 03 14:15:29.604469 master-0 kubenswrapper[4430]: I1203 14:15:29.604400 4430 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="8a00233b22d19df39b2e1c8ba133b3c2" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:15:30.526409 master-0 kubenswrapper[4430]: E1203 14:15:30.526268 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[telemeter-client-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:15:31.491671 master-0 kubenswrapper[4430]: I1203 14:15:31.491629 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:15:33.605302 master-0 kubenswrapper[4430]: I1203 14:15:33.605254 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Dec 03 14:15:33.808744 master-0 kubenswrapper[4430]: I1203 14:15:33.808695 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 14:15:33.979845 master-0 kubenswrapper[4430]: I1203 14:15:33.979780 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 03 14:15:34.036835 master-0 kubenswrapper[4430]: I1203 14:15:34.036791 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 14:15:34.425307 master-0 kubenswrapper[4430]: I1203 14:15:34.425162 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 03 
14:15:34.513755 master-0 kubenswrapper[4430]: I1203 14:15:34.513692 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Dec 03 14:15:34.900846 master-0 kubenswrapper[4430]: I1203 14:15:34.900788 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Dec 03 14:15:34.903287 master-0 kubenswrapper[4430]: I1203 14:15:34.903256 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Dec 03 14:15:34.929501 master-0 kubenswrapper[4430]: I1203 14:15:34.929452 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Dec 03 14:15:35.116175 master-0 kubenswrapper[4430]: I1203 14:15:35.116130 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 14:15:35.251931 master-0 kubenswrapper[4430]: I1203 14:15:35.251771 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 03 14:15:35.268694 master-0 kubenswrapper[4430]: I1203 14:15:35.268614 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw" Dec 03 14:15:35.496193 master-0 kubenswrapper[4430]: I1203 14:15:35.496139 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 03 14:15:35.659110 master-0 kubenswrapper[4430]: I1203 14:15:35.659018 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5" Dec 03 14:15:35.659515 master-0 kubenswrapper[4430]: I1203 14:15:35.659389 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 14:15:35.660484 master-0 kubenswrapper[4430]: I1203 14:15:35.660445 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 14:15:35.678077 master-0 kubenswrapper[4430]: I1203 14:15:35.677942 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:15:35.684123 master-0 kubenswrapper[4430]: I1203 14:15:35.684071 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:15:35.696190 master-0 kubenswrapper[4430]: I1203 14:15:35.696132 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-fwsd5" Dec 03 14:15:35.703980 master-0 kubenswrapper[4430]: I1203 14:15:35.703926 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:15:35.723663 master-0 kubenswrapper[4430]: I1203 14:15:35.723586 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 03 14:15:35.942229 master-0 kubenswrapper[4430]: I1203 14:15:35.939929 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 14:15:35.985616 master-0 kubenswrapper[4430]: I1203 14:15:35.985561 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Dec 03 14:15:36.081639 master-0 kubenswrapper[4430]: I1203 14:15:36.081500 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 14:15:36.180516 master-0 kubenswrapper[4430]: I1203 14:15:36.180462 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 03 14:15:36.200135 master-0 kubenswrapper[4430]: I1203 14:15:36.200066 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 14:15:36.249982 master-0 kubenswrapper[4430]: I1203 14:15:36.249914 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 14:15:36.448056 master-0 kubenswrapper[4430]: I1203 14:15:36.447911 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 03 14:15:36.448537 master-0 kubenswrapper[4430]: I1203 14:15:36.448515 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 14:15:36.553215 master-0 kubenswrapper[4430]: 
I1203 14:15:36.553160 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 14:15:36.561237 master-0 kubenswrapper[4430]: I1203 14:15:36.561201 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 14:15:36.729511 master-0 kubenswrapper[4430]: I1203 14:15:36.729436 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Dec 03 14:15:36.820208 master-0 kubenswrapper[4430]: I1203 14:15:36.820132 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 14:15:36.877559 master-0 kubenswrapper[4430]: I1203 14:15:36.877463 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 03 14:15:36.908880 master-0 kubenswrapper[4430]: I1203 14:15:36.908661 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 03 14:15:37.049448 master-0 kubenswrapper[4430]: I1203 14:15:37.049372 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 03 14:15:37.127863 master-0 kubenswrapper[4430]: I1203 14:15:37.127783 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 14:15:37.339147 master-0 kubenswrapper[4430]: I1203 14:15:37.338996 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 14:15:37.380145 master-0 kubenswrapper[4430]: I1203 14:15:37.380091 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 03 14:15:37.394181 master-0 
kubenswrapper[4430]: I1203 14:15:37.394129 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Dec 03 14:15:37.426516 master-0 kubenswrapper[4430]: I1203 14:15:37.426462 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 03 14:15:37.429188 master-0 kubenswrapper[4430]: I1203 14:15:37.429161 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 03 14:15:37.496312 master-0 kubenswrapper[4430]: I1203 14:15:37.496244 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 14:15:37.531282 master-0 kubenswrapper[4430]: I1203 14:15:37.531228 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 03 14:15:37.534289 master-0 kubenswrapper[4430]: I1203 14:15:37.534256 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 14:15:37.574564 master-0 kubenswrapper[4430]: I1203 14:15:37.574486 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 14:15:37.601877 master-0 kubenswrapper[4430]: I1203 14:15:37.601742 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 14:15:37.604559 master-0 kubenswrapper[4430]: I1203 14:15:37.604517 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 14:15:37.682217 master-0 kubenswrapper[4430]: I1203 14:15:37.682147 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"847952d729eb349f022619193276a8d9a254cda0e57f105adcc7c0ffe39e8719"}
Dec 03 14:15:37.723092 master-0 kubenswrapper[4430]: I1203 14:15:37.722916 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Dec 03 14:15:37.723350 master-0 kubenswrapper[4430]: I1203 14:15:37.723152 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 14:15:37.867799 master-0 kubenswrapper[4430]: I1203 14:15:37.867684 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 14:15:37.892517 master-0 kubenswrapper[4430]: I1203 14:15:37.892476 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 14:15:37.947944 master-0 kubenswrapper[4430]: I1203 14:15:37.947891 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Dec 03 14:15:37.979929 master-0 kubenswrapper[4430]: I1203 14:15:37.979882 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Dec 03 14:15:38.066458 master-0 kubenswrapper[4430]: I1203 14:15:38.064268 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 03 14:15:38.128192 master-0 kubenswrapper[4430]: I1203 14:15:38.128029 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 03 14:15:38.280057 master-0 kubenswrapper[4430]: I1203 14:15:38.279991 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Dec 03 14:15:38.291257 master-0 kubenswrapper[4430]: I1203 14:15:38.291201 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:15:38.411760 master-0 kubenswrapper[4430]: I1203 14:15:38.411595 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 14:15:38.422752 master-0 kubenswrapper[4430]: I1203 14:15:38.422414 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 03 14:15:38.491301 master-0 kubenswrapper[4430]: I1203 14:15:38.491245 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Dec 03 14:15:38.580362 master-0 kubenswrapper[4430]: I1203 14:15:38.579209 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 03 14:15:38.644788 master-0 kubenswrapper[4430]: I1203 14:15:38.644729 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 03 14:15:38.738465 master-0 kubenswrapper[4430]: I1203 14:15:38.738403 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Dec 03 14:15:38.822923 master-0 kubenswrapper[4430]: I1203 14:15:38.822855 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 03 14:15:38.828900 master-0 kubenswrapper[4430]: I1203 14:15:38.828856 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Dec 03 14:15:38.831904 master-0 kubenswrapper[4430]: I1203 14:15:38.831687 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68"
Dec 03 14:15:38.860487 master-0 kubenswrapper[4430]: I1203 14:15:38.852700 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-cb9jg"
Dec 03 14:15:38.865149 master-0 kubenswrapper[4430]: I1203 14:15:38.865087 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 14:15:38.936940 master-0 kubenswrapper[4430]: I1203 14:15:38.934789 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 03 14:15:38.960855 master-0 kubenswrapper[4430]: I1203 14:15:38.960768 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Dec 03 14:15:38.991279 master-0 kubenswrapper[4430]: I1203 14:15:38.991157 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:15:38.993266 master-0 kubenswrapper[4430]: I1203 14:15:38.992847 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Dec 03 14:15:39.089648 master-0 kubenswrapper[4430]: I1203 14:15:39.089584 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 14:15:39.112360 master-0 kubenswrapper[4430]: I1203 14:15:39.112301 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 14:15:39.119305 master-0 kubenswrapper[4430]: I1203 14:15:39.119272 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 03 14:15:39.121572 master-0 kubenswrapper[4430]: I1203 14:15:39.120686 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 14:15:39.176679 master-0 kubenswrapper[4430]: I1203 14:15:39.175110 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 03 14:15:39.316763 master-0 kubenswrapper[4430]: I1203 14:15:39.316635 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 03 14:15:39.317022 master-0 kubenswrapper[4430]: I1203 14:15:39.316754 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Dec 03 14:15:39.341442 master-0 kubenswrapper[4430]: I1203 14:15:39.340707 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 14:15:39.348604 master-0 kubenswrapper[4430]: I1203 14:15:39.346020 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 14:15:39.363686 master-0 kubenswrapper[4430]: I1203 14:15:39.363627 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Dec 03 14:15:39.369767 master-0 kubenswrapper[4430]: I1203 14:15:39.369720 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 14:15:39.472520 master-0 kubenswrapper[4430]: I1203 14:15:39.472466 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-js47f"
Dec 03 14:15:39.503774 master-0 kubenswrapper[4430]: I1203 14:15:39.503727 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 14:15:39.523375 master-0 kubenswrapper[4430]: I1203 14:15:39.523319 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:15:39.540120 master-0 kubenswrapper[4430]: I1203 14:15:39.540071 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7n524"
Dec 03 14:15:39.545063 master-0 kubenswrapper[4430]: I1203 14:15:39.545014 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Dec 03 14:15:39.562410 master-0 kubenswrapper[4430]: I1203 14:15:39.562355 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Dec 03 14:15:39.570307 master-0 kubenswrapper[4430]: I1203 14:15:39.570183 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Dec 03 14:15:39.571907 master-0 kubenswrapper[4430]: I1203 14:15:39.571866 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 14:15:39.581117 master-0 kubenswrapper[4430]: I1203 14:15:39.581066 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 14:15:39.599493 master-0 kubenswrapper[4430]: I1203 14:15:39.599404 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Dec 03 14:15:39.603628 master-0 kubenswrapper[4430]: I1203 14:15:39.603569 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4"
Dec 03 14:15:39.631804 master-0 kubenswrapper[4430]: I1203 14:15:39.631728 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv"
Dec 03 14:15:39.654766 master-0 kubenswrapper[4430]: I1203 14:15:39.654693 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 14:15:39.692175 master-0 kubenswrapper[4430]: I1203 14:15:39.692111 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 03 14:15:39.740293 master-0 kubenswrapper[4430]: I1203 14:15:39.740230 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 14:15:39.744642 master-0 kubenswrapper[4430]: I1203 14:15:39.744509 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 03 14:15:39.785439 master-0 kubenswrapper[4430]: I1203 14:15:39.785210 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l6rgr"
Dec 03 14:15:39.822369 master-0 kubenswrapper[4430]: I1203 14:15:39.822227 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2bc14vqi7sofg"
Dec 03 14:15:39.831167 master-0 kubenswrapper[4430]: I1203 14:15:39.831116 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 14:15:39.832413 master-0 kubenswrapper[4430]: I1203 14:15:39.832375 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 14:15:39.907641 master-0 kubenswrapper[4430]: I1203 14:15:39.907565 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 03 14:15:39.907976 master-0 kubenswrapper[4430]: I1203 14:15:39.907944 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Dec 03 14:15:39.922837 master-0 kubenswrapper[4430]: I1203 14:15:39.922785 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 03 14:15:39.928893 master-0 kubenswrapper[4430]: I1203 14:15:39.928846 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:15:39.971030 master-0 kubenswrapper[4430]: I1203 14:15:39.966701 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 14:15:39.984395 master-0 kubenswrapper[4430]: I1203 14:15:39.984352 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 03 14:15:39.999168 master-0 kubenswrapper[4430]: I1203 14:15:39.999130 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Dec 03 14:15:40.099826 master-0 kubenswrapper[4430]: I1203 14:15:40.099711 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf"
Dec 03 14:15:40.102107 master-0 kubenswrapper[4430]: I1203 14:15:40.102070 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 03 14:15:40.118573 master-0 kubenswrapper[4430]: I1203 14:15:40.118517 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 03 14:15:40.169941 master-0 kubenswrapper[4430]: I1203 14:15:40.169848 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 03 14:15:40.280612 master-0 kubenswrapper[4430]: I1203 14:15:40.280542 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:15:40.295905 master-0 kubenswrapper[4430]: I1203 14:15:40.295859 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 03 14:15:40.407403 master-0 kubenswrapper[4430]: I1203 14:15:40.407222 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Dec 03 14:15:40.427373 master-0 kubenswrapper[4430]: I1203 14:15:40.427325 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 03 14:15:40.460505 master-0 kubenswrapper[4430]: I1203 14:15:40.460381 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 14:15:40.465009 master-0 kubenswrapper[4430]: I1203 14:15:40.463823 4430 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 14:15:40.504446 master-0 kubenswrapper[4430]: I1203 14:15:40.501981 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 03 14:15:40.518537 master-0 kubenswrapper[4430]: I1203 14:15:40.516059 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Dec 03 14:15:40.566540 master-0 kubenswrapper[4430]: I1203 14:15:40.565429 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 03 14:15:40.619707 master-0 kubenswrapper[4430]: I1203 14:15:40.619641 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 03 14:15:40.648744 master-0 kubenswrapper[4430]: I1203 14:15:40.648686 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6"
Dec 03 14:15:40.660227 master-0 kubenswrapper[4430]: I1203 14:15:40.660193 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Dec 03 14:15:40.700060 master-0 kubenswrapper[4430]: I1203 14:15:40.700005 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh"
Dec 03 14:15:40.760607 master-0 kubenswrapper[4430]: I1203 14:15:40.760547 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 14:15:40.777126 master-0 kubenswrapper[4430]: I1203 14:15:40.777066 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 14:15:40.820359 master-0 kubenswrapper[4430]: I1203 14:15:40.820277 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 03 14:15:40.830709 master-0 kubenswrapper[4430]: I1203 14:15:40.830664 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 14:15:40.850014 master-0 kubenswrapper[4430]: I1203 14:15:40.849951 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 14:15:40.880247 master-0 kubenswrapper[4430]: I1203 14:15:40.880137 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 14:15:40.894077 master-0 kubenswrapper[4430]: I1203 14:15:40.893964 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 03 14:15:40.937873 master-0 kubenswrapper[4430]: I1203 14:15:40.937643 4430 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 03 14:15:40.966962 master-0 kubenswrapper[4430]: I1203 14:15:40.966877 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Dec 03 14:15:40.998483 master-0 kubenswrapper[4430]: I1203 14:15:40.998395 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 14:15:41.035271 master-0 kubenswrapper[4430]: I1203 14:15:41.035201 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Dec 03 14:15:41.130915 master-0 kubenswrapper[4430]: I1203 14:15:41.130835 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Dec 03 14:15:41.157462 master-0 kubenswrapper[4430]: I1203 14:15:41.153532 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 03 14:15:41.215527 master-0 kubenswrapper[4430]: I1203 14:15:41.215308 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 14:15:41.300359 master-0 kubenswrapper[4430]: I1203 14:15:41.300289 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 03 14:15:41.332605 master-0 kubenswrapper[4430]: I1203 14:15:41.332540 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Dec 03 14:15:41.341927 master-0 kubenswrapper[4430]: I1203 14:15:41.341866 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Dec 03 14:15:41.350200 master-0 kubenswrapper[4430]: I1203 14:15:41.350135 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 03 14:15:41.592988 master-0 kubenswrapper[4430]: I1203 14:15:41.592919 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 14:15:41.601295 master-0 kubenswrapper[4430]: I1203 14:15:41.601229 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Dec 03 14:15:41.667406 master-0 kubenswrapper[4430]: I1203 14:15:41.667362 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Dec 03 14:15:41.686054 master-0 kubenswrapper[4430]: I1203 14:15:41.685996 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 03 14:15:41.701850 master-0 kubenswrapper[4430]: I1203 14:15:41.701766 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 03 14:15:41.714639 master-0 kubenswrapper[4430]: I1203 14:15:41.714592 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Dec 03 14:15:41.760152 master-0 kubenswrapper[4430]: I1203 14:15:41.760083 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 03 14:15:41.777814 master-0 kubenswrapper[4430]: I1203 14:15:41.777771 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 03 14:15:41.785142 master-0 kubenswrapper[4430]: I1203 14:15:41.785048 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 03 14:15:41.798019 master-0 kubenswrapper[4430]: I1203 14:15:41.797954 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 14:15:41.798411 master-0 kubenswrapper[4430]: I1203 14:15:41.797966 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:15:41.833649 master-0 kubenswrapper[4430]: I1203 14:15:41.833578 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bdlwz"
Dec 03 14:15:41.855203 master-0 kubenswrapper[4430]: I1203 14:15:41.855150 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 14:15:41.864232 master-0 kubenswrapper[4430]: I1203 14:15:41.864165 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 03 14:15:41.885965 master-0 kubenswrapper[4430]: I1203 14:15:41.885910 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Dec 03 14:15:42.013748 master-0 kubenswrapper[4430]: I1203 14:15:42.013710 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 14:15:42.042752 master-0 kubenswrapper[4430]: I1203 14:15:42.042706 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 14:15:42.058369 master-0 kubenswrapper[4430]: I1203 14:15:42.058321 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv"
Dec 03 14:15:42.118406 master-0 kubenswrapper[4430]: I1203 14:15:42.118163 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 14:15:42.151965 master-0 kubenswrapper[4430]: I1203 14:15:42.151917 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 03 14:15:42.262523 master-0 kubenswrapper[4430]: I1203 14:15:42.262325 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 14:15:42.294931 master-0 kubenswrapper[4430]: I1203 14:15:42.294852 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 03 14:15:42.303808 master-0 kubenswrapper[4430]: I1203 14:15:42.303765 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 03 14:15:42.304365 master-0 kubenswrapper[4430]: I1203 14:15:42.304265 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 14:15:42.358380 master-0 kubenswrapper[4430]: I1203 14:15:42.356897 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 14:15:42.380150 master-0 kubenswrapper[4430]: I1203 14:15:42.380104 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 14:15:42.386269 master-0 kubenswrapper[4430]: I1203 14:15:42.386230 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 03 14:15:42.492255 master-0 kubenswrapper[4430]: I1203 14:15:42.492200 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 03 14:15:42.615399 master-0 kubenswrapper[4430]: I1203 14:15:42.615343 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-6sltv"
Dec 03 14:15:42.646050 master-0 kubenswrapper[4430]: I1203 14:15:42.645876 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8zh52"
Dec 03 14:15:42.650069 master-0 kubenswrapper[4430]: I1203 14:15:42.649988 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Dec 03 14:15:42.701646 master-0 kubenswrapper[4430]: I1203 14:15:42.701569 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 14:15:42.771322 master-0 kubenswrapper[4430]: I1203 14:15:42.771244 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Dec 03 14:15:42.780126 master-0 kubenswrapper[4430]: I1203 14:15:42.780071 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2fgkw"
Dec 03 14:15:42.931951 master-0 kubenswrapper[4430]: I1203 14:15:42.931083 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Dec 03 14:15:42.944665 master-0 kubenswrapper[4430]: I1203 14:15:42.944627 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 14:15:43.142532 master-0 kubenswrapper[4430]: I1203 14:15:43.142005 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Dec 03 14:15:43.168128 master-0 kubenswrapper[4430]: I1203 14:15:43.168060 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 03 14:15:43.170349 master-0 kubenswrapper[4430]: I1203 14:15:43.170316 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:15:43.172664 master-0 kubenswrapper[4430]: I1203 14:15:43.172645 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 14:15:43.232794 master-0 kubenswrapper[4430]: I1203 14:15:43.232713 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 14:15:43.236291 master-0 kubenswrapper[4430]: I1203 14:15:43.236232 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Dec 03 14:15:43.258542 master-0 kubenswrapper[4430]: I1203 14:15:43.258465 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 03 14:15:43.287936 master-0 kubenswrapper[4430]: I1203 14:15:43.287882 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 14:15:43.306857 master-0 kubenswrapper[4430]: I1203 14:15:43.306807 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 03 14:15:43.342095 master-0 kubenswrapper[4430]: I1203 14:15:43.342040 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 03 14:15:43.362449 master-0 kubenswrapper[4430]: I1203 14:15:43.362380 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 03 14:15:43.368083 master-0 kubenswrapper[4430]: I1203 14:15:43.367811 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 03 14:15:43.485342 master-0 kubenswrapper[4430]: I1203 14:15:43.485150 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 03 14:15:43.499708 master-0 kubenswrapper[4430]: I1203 14:15:43.499667 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 03 14:15:43.680031 master-0 kubenswrapper[4430]: I1203 14:15:43.679960 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Dec 03 14:15:43.739693 master-0 kubenswrapper[4430]: I1203 14:15:43.739520 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nqkqh"
Dec 03 14:15:43.741307 master-0 kubenswrapper[4430]: I1203 14:15:43.739793 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"ae814fc15ad121d6706b68a31e9d8c53b23ad4825832da984dbf729ef7765a86"}
Dec 03 14:15:43.752197 master-0 kubenswrapper[4430]: I1203 14:15:43.752150 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9"
Dec 03 14:15:43.765378 master-0 kubenswrapper[4430]: I1203 14:15:43.764889 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-n8h5v"
Dec 03 14:15:43.816468 master-0 kubenswrapper[4430]: I1203 14:15:43.816390 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Dec 03 14:15:43.865322 master-0 kubenswrapper[4430]: I1203 14:15:43.864617 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-2blfd"
Dec 03 14:15:43.901604 master-0 kubenswrapper[4430]: I1203 14:15:43.901533 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 14:15:43.928921 master-0 kubenswrapper[4430]: I1203 14:15:43.928876 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 14:15:43.931504 master-0 kubenswrapper[4430]: I1203 14:15:43.931473 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 14:15:43.958655 master-0 kubenswrapper[4430]: I1203 14:15:43.958587 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 14:15:43.968245 master-0 kubenswrapper[4430]: I1203 14:15:43.965844 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 03 14:15:43.972443 master-0 kubenswrapper[4430]: I1203 14:15:43.972087 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 14:15:43.977271 master-0 kubenswrapper[4430]: I1203 14:15:43.976783 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 14:15:44.030975 master-0 kubenswrapper[4430]: I1203 14:15:44.030894 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 14:15:44.120159 master-0 kubenswrapper[4430]: I1203 14:15:44.120101 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Dec 03 14:15:44.185504 master-0 kubenswrapper[4430]: I1203 14:15:44.185398 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Dec 03 14:15:44.211780 master-0 kubenswrapper[4430]: I1203 14:15:44.211719 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-jmtqw"
Dec 03 14:15:44.218467 master-0 kubenswrapper[4430]: I1203 14:15:44.218402 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 03 14:15:44.285190 master-0 kubenswrapper[4430]: I1203 14:15:44.285059 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 03 14:15:44.296629 master-0 kubenswrapper[4430]: I1203 14:15:44.296580 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz"
Dec 03 14:15:44.406281 master-0 kubenswrapper[4430]: I1203 14:15:44.406219 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 14:15:44.416272 master-0 kubenswrapper[4430]: I1203 14:15:44.416225 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 14:15:44.437327 master-0 kubenswrapper[4430]: I1203 14:15:44.437283 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Dec 03 14:15:44.480769 master-0 kubenswrapper[4430]: I1203 14:15:44.480712 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 14:15:44.501288 master-0 kubenswrapper[4430]: I1203 14:15:44.501233 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 14:15:44.579793 master-0 kubenswrapper[4430]: I1203 14:15:44.579638 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Dec 03 14:15:44.631133 master-0 kubenswrapper[4430]: I1203 14:15:44.631077 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d"
Dec 03 14:15:44.717533 master-0 kubenswrapper[4430]: I1203 14:15:44.717488 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Dec 03 14:15:44.794908 master-0 kubenswrapper[4430]: I1203 14:15:44.794822 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:15:44.837194 master-0 kubenswrapper[4430]: I1203 14:15:44.837055 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 14:15:44.859819 master-0 kubenswrapper[4430]: I1203 14:15:44.859760 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8"
Dec 03 14:15:44.916559 master-0 kubenswrapper[4430]: I1203 14:15:44.916500 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 03 14:15:44.955980 master-0 kubenswrapper[4430]: I1203 14:15:44.955928 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 03 14:15:44.989979 master-0 kubenswrapper[4430]: I1203 14:15:44.989907 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 14:15:45.021389 master-0 kubenswrapper[4430]: I1203 14:15:45.021345 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Dec 03 14:15:45.089128 master-0 kubenswrapper[4430]: I1203 14:15:45.088921 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx"
Dec 03 14:15:45.114212 master-0 kubenswrapper[4430]: I1203 14:15:45.114140 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Dec 03 14:15:45.142374 master-0 kubenswrapper[4430]: I1203 14:15:45.138601 4430 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 14:15:45.201166 master-0 kubenswrapper[4430]: I1203 14:15:45.201090 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 14:15:45.277529 master-0 kubenswrapper[4430]: I1203 14:15:45.277450 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 14:15:45.338527 master-0 kubenswrapper[4430]: I1203 14:15:45.338406 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Dec 03 14:15:45.390989 master-0 kubenswrapper[4430]: I1203 14:15:45.390835 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 03 14:15:45.398038 master-0 kubenswrapper[4430]: I1203 14:15:45.397941 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:15:45.462043 master-0 kubenswrapper[4430]: I1203 14:15:45.461970 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:15:45.500805 master-0 kubenswrapper[4430]: I1203 14:15:45.500736 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 03 14:15:45.542400 master-0 kubenswrapper[4430]: I1203 14:15:45.541859 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 03 14:15:45.569206 master-0 kubenswrapper[4430]: I1203 14:15:45.569147 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 03 14:15:45.607440 master-0 kubenswrapper[4430]: I1203 14:15:45.606438 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 03 14:15:45.652035 master-0 kubenswrapper[4430]: I1203 14:15:45.651911 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Dec 03 14:15:45.797056 master-0 kubenswrapper[4430]: I1203 14:15:45.796973 4430 reflector.go:368] Caches populated for *v1.Secret
from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g" Dec 03 14:15:45.833643 master-0 kubenswrapper[4430]: I1203 14:15:45.833576 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 03 14:15:45.837974 master-0 kubenswrapper[4430]: I1203 14:15:45.837873 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 14:15:45.920601 master-0 kubenswrapper[4430]: I1203 14:15:45.920478 4430 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 14:15:45.921094 master-0 kubenswrapper[4430]: I1203 14:15:45.921072 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Dec 03 14:15:45.934191 master-0 kubenswrapper[4430]: I1203 14:15:45.934130 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 03 14:15:45.959840 master-0 kubenswrapper[4430]: I1203 14:15:45.959789 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 14:15:45.961952 master-0 kubenswrapper[4430]: I1203 14:15:45.961915 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 14:15:45.980297 master-0 kubenswrapper[4430]: I1203 14:15:45.980225 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Dec 03 14:15:46.141213 master-0 kubenswrapper[4430]: I1203 14:15:46.141146 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:15:46.260016 master-0 kubenswrapper[4430]: I1203 14:15:46.259946 4430 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 14:15:46.295536 master-0 kubenswrapper[4430]: I1203 14:15:46.295472 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:15:46.330630 master-0 kubenswrapper[4430]: I1203 14:15:46.330559 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Dec 03 14:15:46.409641 master-0 kubenswrapper[4430]: I1203 14:15:46.409598 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 14:15:46.565286 master-0 kubenswrapper[4430]: I1203 14:15:46.565161 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 14:15:46.603572 master-0 kubenswrapper[4430]: I1203 14:15:46.603463 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 14:15:46.644747 master-0 kubenswrapper[4430]: I1203 14:15:46.644678 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 14:15:46.672495 master-0 kubenswrapper[4430]: I1203 14:15:46.672383 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 14:15:46.691166 master-0 kubenswrapper[4430]: I1203 14:15:46.691105 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 14:15:46.755262 master-0 kubenswrapper[4430]: I1203 14:15:46.755206 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 14:15:46.847402 master-0 kubenswrapper[4430]: I1203 14:15:46.847351 4430 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 14:15:46.919643 master-0 kubenswrapper[4430]: I1203 14:15:46.919583 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 14:15:46.964099 master-0 kubenswrapper[4430]: I1203 14:15:46.964041 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 14:15:46.986284 master-0 kubenswrapper[4430]: I1203 14:15:46.986222 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 14:15:47.049257 master-0 kubenswrapper[4430]: I1203 14:15:47.049199 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 14:15:47.069229 master-0 kubenswrapper[4430]: I1203 14:15:47.069193 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 14:15:47.152435 master-0 kubenswrapper[4430]: I1203 14:15:47.152266 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 14:15:47.159592 master-0 kubenswrapper[4430]: I1203 14:15:47.159555 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Dec 03 14:15:47.181909 master-0 kubenswrapper[4430]: I1203 14:15:47.181858 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99" Dec 03 14:15:47.229795 master-0 kubenswrapper[4430]: I1203 14:15:47.229747 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 03 14:15:47.230175 master-0 kubenswrapper[4430]: I1203 14:15:47.230137 4430 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:15:47.249077 master-0 kubenswrapper[4430]: I1203 14:15:47.249039 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Dec 03 14:15:47.249307 master-0 kubenswrapper[4430]: I1203 14:15:47.249227 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 14:15:47.444741 master-0 kubenswrapper[4430]: I1203 14:15:47.444614 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 14:15:47.449866 master-0 kubenswrapper[4430]: I1203 14:15:47.449814 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 14:15:47.568031 master-0 kubenswrapper[4430]: I1203 14:15:47.567980 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" Dec 03 14:15:47.578342 master-0 kubenswrapper[4430]: I1203 14:15:47.578303 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 14:15:47.764863 master-0 kubenswrapper[4430]: I1203 14:15:47.764790 4430 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 14:15:47.770792 master-0 kubenswrapper[4430]: I1203 14:15:47.770727 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 14:15:47.771120 master-0 kubenswrapper[4430]: I1203 14:15:47.771062 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=45.771047108 podStartE2EDuration="45.771047108s" podCreationTimestamp="2025-12-03 14:15:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:15:23.048941032 +0000 UTC m=+423.671855128" watchObservedRunningTime="2025-12-03 14:15:47.771047108 +0000 UTC m=+448.393961184" Dec 03 14:15:47.772200 master-0 kubenswrapper[4430]: I1203 14:15:47.772144 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"ac7f4cd2def2bb496dc5f5aa1e8f39d0c213c9f9d0e8923d0950adbd07e9c37b"} Dec 03 14:15:47.775535 master-0 kubenswrapper[4430]: I1203 14:15:47.775493 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:15:47.775617 master-0 kubenswrapper[4430]: I1203 14:15:47.775556 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:15:47.775617 master-0 kubenswrapper[4430]: I1203 14:15:47.775599 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-764cbf5554-kftwv"] Dec 03 14:15:47.776617 master-0 kubenswrapper[4430]: I1203 14:15:47.776584 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:47.776617 master-0 kubenswrapper[4430]: I1203 14:15:47.776608 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="bf1c97e0-6347-4d35-9851-0abe86e172f4" Dec 03 14:15:47.807871 master-0 kubenswrapper[4430]: I1203 14:15:47.805675 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=24.805652836 podStartE2EDuration="24.805652836s" podCreationTimestamp="2025-12-03 14:15:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:15:47.798752273 +0000 UTC m=+448.421666359" watchObservedRunningTime="2025-12-03 14:15:47.805652836 +0000 UTC m=+448.428566912" Dec 03 14:15:47.818342 master-0 kubenswrapper[4430]: I1203 14:15:47.818291 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 03 14:15:47.929457 master-0 kubenswrapper[4430]: I1203 14:15:47.929377 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 14:15:48.218400 master-0 kubenswrapper[4430]: I1203 14:15:48.218364 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl" Dec 03 14:15:48.253245 master-0 kubenswrapper[4430]: I1203 14:15:48.253182 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Dec 03 14:15:48.256189 master-0 kubenswrapper[4430]: I1203 14:15:48.256161 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Dec 03 14:15:48.310517 master-0 kubenswrapper[4430]: I1203 14:15:48.310456 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 14:15:48.421105 master-0 kubenswrapper[4430]: I1203 14:15:48.421054 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 14:15:48.490783 master-0 kubenswrapper[4430]: I1203 14:15:48.490648 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 03 14:15:48.493948 master-0 kubenswrapper[4430]: I1203 14:15:48.493905 4430 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 14:15:48.515131 master-0 kubenswrapper[4430]: I1203 14:15:48.515048 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 14:15:48.525786 master-0 kubenswrapper[4430]: I1203 14:15:48.525734 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Dec 03 14:15:48.784114 master-0 kubenswrapper[4430]: I1203 14:15:48.783964 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:15:48.906009 master-0 kubenswrapper[4430]: I1203 14:15:48.905935 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Dec 03 14:15:49.100731 master-0 kubenswrapper[4430]: I1203 14:15:49.100599 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 14:15:49.162718 master-0 kubenswrapper[4430]: I1203 14:15:49.162641 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Dec 03 14:15:49.172369 master-0 kubenswrapper[4430]: I1203 14:15:49.172326 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Dec 03 14:15:49.190716 master-0 kubenswrapper[4430]: I1203 14:15:49.190638 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 14:15:49.318961 master-0 kubenswrapper[4430]: I1203 14:15:49.318898 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 14:15:49.378186 master-0 kubenswrapper[4430]: I1203 14:15:49.378050 4430 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"whereabouts-config" Dec 03 14:15:49.440261 master-0 kubenswrapper[4430]: I1203 14:15:49.440187 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 14:15:49.455035 master-0 kubenswrapper[4430]: I1203 14:15:49.454984 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 14:15:49.545448 master-0 kubenswrapper[4430]: I1203 14:15:49.545356 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 03 14:15:49.705510 master-0 kubenswrapper[4430]: I1203 14:15:49.705348 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 03 14:15:49.713765 master-0 kubenswrapper[4430]: I1203 14:15:49.713720 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 14:15:49.754311 master-0 kubenswrapper[4430]: I1203 14:15:49.754256 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-twpdm" Dec 03 14:15:49.838075 master-0 kubenswrapper[4430]: I1203 14:15:49.838008 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 03 14:15:50.092279 master-0 kubenswrapper[4430]: I1203 14:15:50.092224 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Dec 03 14:15:50.172861 master-0 kubenswrapper[4430]: I1203 14:15:50.172789 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 14:15:50.223098 master-0 kubenswrapper[4430]: I1203 14:15:50.223053 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 
14:15:50.389715 master-0 kubenswrapper[4430]: I1203 14:15:50.389557 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd" Dec 03 14:15:50.497576 master-0 kubenswrapper[4430]: I1203 14:15:50.497510 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Dec 03 14:15:50.500242 master-0 kubenswrapper[4430]: I1203 14:15:50.500208 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 03 14:15:50.530765 master-0 kubenswrapper[4430]: I1203 14:15:50.530714 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 14:15:50.603539 master-0 kubenswrapper[4430]: I1203 14:15:50.603478 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 03 14:15:50.772046 master-0 kubenswrapper[4430]: I1203 14:15:50.771954 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 14:15:50.801605 master-0 kubenswrapper[4430]: I1203 14:15:50.801510 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"5f404d464133aa8363203704d73365d24b12404b65044116fabfafd44fab495c"} Dec 03 14:15:50.842541 master-0 kubenswrapper[4430]: I1203 14:15:50.842383 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podStartSLOduration=140.069349264 podStartE2EDuration="2m23.842343728s" podCreationTimestamp="2025-12-03 14:13:27 +0000 UTC" firstStartedPulling="2025-12-03 14:15:36.889242263 +0000 UTC m=+437.512156339" lastFinishedPulling="2025-12-03 14:15:40.662236727 +0000 UTC m=+441.285150803" 
observedRunningTime="2025-12-03 14:15:50.828264504 +0000 UTC m=+451.451178580" watchObservedRunningTime="2025-12-03 14:15:50.842343728 +0000 UTC m=+451.465257844" Dec 03 14:15:50.934909 master-0 kubenswrapper[4430]: I1203 14:15:50.934708 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 03 14:15:50.980271 master-0 kubenswrapper[4430]: I1203 14:15:50.980181 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Dec 03 14:15:51.926325 master-0 kubenswrapper[4430]: I1203 14:15:51.926279 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 14:15:52.268377 master-0 kubenswrapper[4430]: I1203 14:15:52.268318 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 14:15:57.069834 master-0 kubenswrapper[4430]: I1203 14:15:57.069785 4430 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:15:57.070620 master-0 kubenswrapper[4430]: I1203 14:15:57.070157 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="f08d11be0e2919664ff2ea4b2440d0e0" containerName="startup-monitor" containerID="cri-o://9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde" gracePeriod=5 Dec 03 14:16:02.789658 master-0 kubenswrapper[4430]: I1203 14:16:02.789585 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f08d11be0e2919664ff2ea4b2440d0e0/startup-monitor/0.log" Dec 03 14:16:02.790679 master-0 kubenswrapper[4430]: I1203 14:16:02.789798 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:16:02.888115 master-0 kubenswrapper[4430]: I1203 14:16:02.888059 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f08d11be0e2919664ff2ea4b2440d0e0/startup-monitor/0.log" Dec 03 14:16:02.888346 master-0 kubenswrapper[4430]: I1203 14:16:02.888155 4430 generic.go:334] "Generic (PLEG): container finished" podID="f08d11be0e2919664ff2ea4b2440d0e0" containerID="9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde" exitCode=137 Dec 03 14:16:02.888346 master-0 kubenswrapper[4430]: I1203 14:16:02.888238 4430 scope.go:117] "RemoveContainer" containerID="9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde" Dec 03 14:16:02.888346 master-0 kubenswrapper[4430]: I1203 14:16:02.888270 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:16:02.909663 master-0 kubenswrapper[4430]: I1203 14:16:02.909569 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir\") pod \"f08d11be0e2919664ff2ea4b2440d0e0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " Dec 03 14:16:02.909663 master-0 kubenswrapper[4430]: I1203 14:16:02.909629 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir\") pod \"f08d11be0e2919664ff2ea4b2440d0e0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " Dec 03 14:16:02.909663 master-0 kubenswrapper[4430]: I1203 14:16:02.909657 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests\") pod \"f08d11be0e2919664ff2ea4b2440d0e0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " Dec 03 14:16:02.910640 master-0 kubenswrapper[4430]: I1203 14:16:02.909746 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock\") pod \"f08d11be0e2919664ff2ea4b2440d0e0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " Dec 03 14:16:02.910640 master-0 kubenswrapper[4430]: I1203 14:16:02.909797 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log\") pod \"f08d11be0e2919664ff2ea4b2440d0e0\" (UID: \"f08d11be0e2919664ff2ea4b2440d0e0\") " Dec 03 14:16:02.910640 master-0 kubenswrapper[4430]: I1203 14:16:02.910388 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log" (OuterVolumeSpecName: "var-log") pod "f08d11be0e2919664ff2ea4b2440d0e0" (UID: "f08d11be0e2919664ff2ea4b2440d0e0"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:16:02.911149 master-0 kubenswrapper[4430]: I1203 14:16:02.910847 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock" (OuterVolumeSpecName: "var-lock") pod "f08d11be0e2919664ff2ea4b2440d0e0" (UID: "f08d11be0e2919664ff2ea4b2440d0e0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:16:02.911450 master-0 kubenswrapper[4430]: I1203 14:16:02.910893 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests" (OuterVolumeSpecName: "manifests") pod "f08d11be0e2919664ff2ea4b2440d0e0" (UID: "f08d11be0e2919664ff2ea4b2440d0e0"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:16:02.911529 master-0 kubenswrapper[4430]: I1203 14:16:02.910988 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f08d11be0e2919664ff2ea4b2440d0e0" (UID: "f08d11be0e2919664ff2ea4b2440d0e0"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:16:02.914637 master-0 kubenswrapper[4430]: I1203 14:16:02.914576 4430 scope.go:117] "RemoveContainer" containerID="9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde" Dec 03 14:16:02.915456 master-0 kubenswrapper[4430]: E1203 14:16:02.915344 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde\": container with ID starting with 9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde not found: ID does not exist" containerID="9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde" Dec 03 14:16:02.915568 master-0 kubenswrapper[4430]: I1203 14:16:02.915510 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde"} err="failed to get container status \"9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde\": rpc error: code = NotFound desc = could not find container 
\"9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde\": container with ID starting with 9806ac0cd42049c8d8e8e6eff63e309e30cc211125a472baa6fe345e09786cde not found: ID does not exist" Dec 03 14:16:02.915859 master-0 kubenswrapper[4430]: I1203 14:16:02.915794 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f08d11be0e2919664ff2ea4b2440d0e0" (UID: "f08d11be0e2919664ff2ea4b2440d0e0"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:16:03.011810 master-0 kubenswrapper[4430]: I1203 14:16:03.011679 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:03.011810 master-0 kubenswrapper[4430]: I1203 14:16:03.011724 4430 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-var-log\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:03.011810 master-0 kubenswrapper[4430]: I1203 14:16:03.011737 4430 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:03.011810 master-0 kubenswrapper[4430]: I1203 14:16:03.011750 4430 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:03.011810 master-0 kubenswrapper[4430]: I1203 14:16:03.011758 4430 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f08d11be0e2919664ff2ea4b2440d0e0-manifests\") on node \"master-0\" 
DevicePath \"\"" Dec 03 14:16:03.597148 master-0 kubenswrapper[4430]: I1203 14:16:03.597063 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f08d11be0e2919664ff2ea4b2440d0e0" path="/var/lib/kubelet/pods/f08d11be0e2919664ff2ea4b2440d0e0/volumes" Dec 03 14:16:03.597594 master-0 kubenswrapper[4430]: I1203 14:16:03.597550 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Dec 03 14:16:03.618853 master-0 kubenswrapper[4430]: I1203 14:16:03.618780 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:16:03.618853 master-0 kubenswrapper[4430]: I1203 14:16:03.618828 4430 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="7509fa41-51fc-4d28-bca2-771558ea7eea" Dec 03 14:16:03.624878 master-0 kubenswrapper[4430]: I1203 14:16:03.624807 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:16:03.624878 master-0 kubenswrapper[4430]: I1203 14:16:03.624867 4430 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="7509fa41-51fc-4d28-bca2-771558ea7eea" Dec 03 14:16:21.236680 master-0 kubenswrapper[4430]: I1203 14:16:21.236600 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:16:21.237580 master-0 kubenswrapper[4430]: I1203 14:16:21.237057 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus" containerID="cri-o://3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612" gracePeriod=600 Dec 03 14:16:21.237580 
master-0 kubenswrapper[4430]: I1203 14:16:21.237092 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy" containerID="cri-o://a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65" gracePeriod=600 Dec 03 14:16:21.237580 master-0 kubenswrapper[4430]: I1203 14:16:21.237218 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-web" containerID="cri-o://9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5" gracePeriod=600 Dec 03 14:16:21.237580 master-0 kubenswrapper[4430]: I1203 14:16:21.237258 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-thanos" containerID="cri-o://7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13" gracePeriod=600 Dec 03 14:16:21.237580 master-0 kubenswrapper[4430]: I1203 14:16:21.237279 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="thanos-sidecar" containerID="cri-o://f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098" gracePeriod=600 Dec 03 14:16:21.237580 master-0 kubenswrapper[4430]: I1203 14:16:21.237331 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="config-reloader" containerID="cri-o://42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" gracePeriod=600 Dec 03 14:16:21.787277 master-0 kubenswrapper[4430]: I1203 14:16:21.787241 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:21.961493 master-0 kubenswrapper[4430]: I1203 14:16:21.961445 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.961871 master-0 kubenswrapper[4430]: I1203 14:16:21.961844 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962050 master-0 kubenswrapper[4430]: I1203 14:16:21.962013 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962184 master-0 kubenswrapper[4430]: I1203 14:16:21.962167 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962305 master-0 kubenswrapper[4430]: I1203 14:16:21.962287 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962444 master-0 
kubenswrapper[4430]: I1203 14:16:21.962412 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962664 master-0 kubenswrapper[4430]: I1203 14:16:21.962644 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962788 master-0 kubenswrapper[4430]: I1203 14:16:21.962772 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.962908 master-0 kubenswrapper[4430]: I1203 14:16:21.962891 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963055 master-0 kubenswrapper[4430]: I1203 14:16:21.963037 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963174 master-0 kubenswrapper[4430]: I1203 14:16:21.963156 4430 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963375 master-0 kubenswrapper[4430]: I1203 14:16:21.963355 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963490 master-0 kubenswrapper[4430]: I1203 14:16:21.962571 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:21.963565 master-0 kubenswrapper[4430]: I1203 14:16:21.963552 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963664 master-0 kubenswrapper[4430]: I1203 14:16:21.963651 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963773 master-0 kubenswrapper[4430]: I1203 14:16:21.963760 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.963960 master-0 kubenswrapper[4430]: I1203 14:16:21.963945 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.964544 master-0 kubenswrapper[4430]: I1203 14:16:21.964524 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.964758 master-0 
kubenswrapper[4430]: I1203 14:16:21.964741 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"6cfc08c2-f287-40b8-bf28-4f884595e93c\" (UID: \"6cfc08c2-f287-40b8-bf28-4f884595e93c\") " Dec 03 14:16:21.965097 master-0 kubenswrapper[4430]: I1203 14:16:21.964128 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:21.965097 master-0 kubenswrapper[4430]: I1203 14:16:21.964226 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:21.965097 master-0 kubenswrapper[4430]: I1203 14:16:21.964403 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:21.965684 master-0 kubenswrapper[4430]: I1203 14:16:21.965643 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out" (OuterVolumeSpecName: "config-out") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:16:21.965894 master-0 kubenswrapper[4430]: I1203 14:16:21.965873 4430 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:21.966010 master-0 kubenswrapper[4430]: I1203 14:16:21.965994 4430 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:21.966189 master-0 kubenswrapper[4430]: I1203 14:16:21.966175 4430 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-config-out\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:21.966305 master-0 kubenswrapper[4430]: I1203 14:16:21.966291 4430 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:21.966491 master-0 kubenswrapper[4430]: I1203 14:16:21.966475 4430 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:21.966601 master-0 kubenswrapper[4430]: I1203 14:16:21.966193 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:16:21.967328 master-0 kubenswrapper[4430]: I1203 14:16:21.967280 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:21.967546 master-0 kubenswrapper[4430]: I1203 14:16:21.967512 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.967978 master-0 kubenswrapper[4430]: I1203 14:16:21.967941 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-kube-rbac-proxy". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.968655 master-0 kubenswrapper[4430]: I1203 14:16:21.968557 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.968741 master-0 kubenswrapper[4430]: I1203 14:16:21.968582 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config" (OuterVolumeSpecName: "config") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.968838 master-0 kubenswrapper[4430]: I1203 14:16:21.968793 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.969551 master-0 kubenswrapper[4430]: I1203 14:16:21.969506 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.969627 master-0 kubenswrapper[4430]: I1203 14:16:21.969559 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv" (OuterVolumeSpecName: "kube-api-access-hxscv") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "kube-api-access-hxscv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:16:21.969834 master-0 kubenswrapper[4430]: I1203 14:16:21.969777 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.969940 master-0 kubenswrapper[4430]: I1203 14:16:21.969894 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:21.970721 master-0 kubenswrapper[4430]: I1203 14:16:21.970695 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "prometheus-k8s-db". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:16:22.015929 master-0 kubenswrapper[4430]: I1203 14:16:22.015798 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config" (OuterVolumeSpecName: "web-config") pod "6cfc08c2-f287-40b8-bf28-4f884595e93c" (UID: "6cfc08c2-f287-40b8-bf28-4f884595e93c"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038119 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13" exitCode=0 Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038156 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65" exitCode=0 Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038166 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5" exitCode=0 Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038174 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098" exitCode=0 Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038180 4430 generic.go:334] "Generic (PLEG): container finished" podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" exitCode=0 Dec 03 14:16:22.038176 master-0 kubenswrapper[4430]: I1203 14:16:22.038187 4430 generic.go:334] "Generic (PLEG): container finished" 
podID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612" exitCode=0 Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038209 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038246 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038282 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038294 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038303 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038313 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} Dec 03 14:16:22.038690 master-0 
kubenswrapper[4430]: I1203 14:16:22.038321 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038329 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6cfc08c2-f287-40b8-bf28-4f884595e93c","Type":"ContainerDied","Data":"b86c403eae06b2512d9ab63dd5d7ee4c866d04452f503f67206ce9e9cd5551d3"} Dec 03 14:16:22.038690 master-0 kubenswrapper[4430]: I1203 14:16:22.038345 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13" Dec 03 14:16:22.063442 master-0 kubenswrapper[4430]: I1203 14:16:22.063376 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65" Dec 03 14:16:22.070899 master-0 kubenswrapper[4430]: I1203 14:16:22.070856 4430 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.070899 master-0 kubenswrapper[4430]: I1203 14:16:22.070900 4430 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070914 4430 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Dec 03 
14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070932 4430 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-tls-assets\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070945 4430 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070963 4430 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6cfc08c2-f287-40b8-bf28-4f884595e93c-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070983 4430 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.070996 4430 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-web-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.071009 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxscv\" (UniqueName: \"kubernetes.io/projected/6cfc08c2-f287-40b8-bf28-4f884595e93c-kube-api-access-hxscv\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.071022 4430 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-grpc-tls\") on 
node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.071032 4430 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.071046 4430 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.071090 master-0 kubenswrapper[4430]: I1203 14:16:22.071059 4430 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6cfc08c2-f287-40b8-bf28-4f884595e93c-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:22.088729 master-0 kubenswrapper[4430]: I1203 14:16:22.087562 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5" Dec 03 14:16:22.099663 master-0 kubenswrapper[4430]: I1203 14:16:22.099601 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:16:22.106510 master-0 kubenswrapper[4430]: I1203 14:16:22.106182 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:16:22.115760 master-0 kubenswrapper[4430]: I1203 14:16:22.115716 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098" Dec 03 14:16:22.140993 master-0 kubenswrapper[4430]: I1203 14:16:22.140952 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.149881 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] 
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150375 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" containerName="installer" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150451 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" containerName="installer" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150469 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" containerName="collect-profiles" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150476 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" containerName="collect-profiles" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150483 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150489 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150496 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="thanos-sidecar" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150502 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="thanos-sidecar" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150515 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f08d11be0e2919664ff2ea4b2440d0e0" containerName="startup-monitor" Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150522 4430 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f08d11be0e2919664ff2ea4b2440d0e0" containerName="startup-monitor"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150534 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150543 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150555 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3165c60b-3cd2-4bda-8c55-aecf00bef18d" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150560 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="3165c60b-3cd2-4bda-8c55-aecf00bef18d" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150568 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="init-config-reloader"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150577 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="init-config-reloader"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150589 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150594 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150608 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-web"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150614 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-web"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150621 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="config-reloader"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150628 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="config-reloader"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: E1203 14:16:22.150647 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-thanos"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150652 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-thanos"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150825 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5db1386-71f6-4b27-b686-5a3bb35659fa" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150837 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="1933a6a0-6ccc-4629-a0e5-a5a4b4575771" containerName="collect-profiles"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150845 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="thanos-sidecar"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150853 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="3165c60b-3cd2-4bda-8c55-aecf00bef18d" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150862 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8" containerName="installer"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150871 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="f08d11be0e2919664ff2ea4b2440d0e0" containerName="startup-monitor"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150879 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150886 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-thanos"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150899 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="kube-rbac-proxy-web"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150908 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="config-reloader"
Dec 03 14:16:22.151377 master-0 kubenswrapper[4430]: I1203 14:16:22.150964 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" containerName="prometheus"
Dec 03 14:16:22.156512 master-0 kubenswrapper[4430]: I1203 14:16:22.153874 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.160192 master-0 kubenswrapper[4430]: I1203 14:16:22.160137 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wksjv"
Dec 03 14:16:22.160192 master-0 kubenswrapper[4430]: I1203 14:16:22.160189 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Dec 03 14:16:22.160490 master-0 kubenswrapper[4430]: I1203 14:16:22.160344 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Dec 03 14:16:22.160581 master-0 kubenswrapper[4430]: I1203 14:16:22.160553 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Dec 03 14:16:22.160716 master-0 kubenswrapper[4430]: I1203 14:16:22.160696 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Dec 03 14:16:22.160960 master-0 kubenswrapper[4430]: I1203 14:16:22.160919 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Dec 03 14:16:22.161308 master-0 kubenswrapper[4430]: I1203 14:16:22.161284 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Dec 03 14:16:22.162648 master-0 kubenswrapper[4430]: I1203 14:16:22.162618 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Dec 03 14:16:22.164596 master-0 kubenswrapper[4430]: I1203 14:16:22.164571 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv"
Dec 03 14:16:22.165352 master-0 kubenswrapper[4430]: I1203 14:16:22.164697 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Dec 03 14:16:22.165352 master-0 kubenswrapper[4430]: I1203 14:16:22.164800 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Dec 03 14:16:22.165762 master-0 kubenswrapper[4430]: I1203 14:16:22.165734 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Dec 03 14:16:22.169223 master-0 kubenswrapper[4430]: I1203 14:16:22.169120 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 03 14:16:22.171296 master-0 kubenswrapper[4430]: I1203 14:16:22.171257 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"
Dec 03 14:16:22.171571 master-0 kubenswrapper[4430]: I1203 14:16:22.171531 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172205 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172252 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172278 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172313 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172561 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172648 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172679 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172736 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172806 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172855 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172881 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.172924 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173000 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173043 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173082 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173122 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173167 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.174534 master-0 kubenswrapper[4430]: I1203 14:16:22.173206 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:16:22.201882 master-0 kubenswrapper[4430]: I1203 14:16:22.201834 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"
Dec 03 14:16:22.217276 master-0 kubenswrapper[4430]: I1203 14:16:22.217224 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"
Dec 03 14:16:22.217743 master-0 kubenswrapper[4430]: E1203 14:16:22.217700 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"
Dec 03 14:16:22.217821 master-0 kubenswrapper[4430]: I1203 14:16:22.217765 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist"
Dec 03 14:16:22.217821 master-0 kubenswrapper[4430]: I1203 14:16:22.217806 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"
Dec 03 14:16:22.218157 master-0 kubenswrapper[4430]: E1203 14:16:22.218128 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"
Dec 03 14:16:22.218249 master-0 kubenswrapper[4430]: I1203 14:16:22.218160 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist"
Dec 03 14:16:22.218249 master-0 kubenswrapper[4430]: I1203 14:16:22.218198 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"
Dec 03 14:16:22.218507 master-0 kubenswrapper[4430]: E1203 14:16:22.218483 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"
Dec 03 14:16:22.218592 master-0 kubenswrapper[4430]: I1203 14:16:22.218506 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist"
Dec 03 14:16:22.218592 master-0 kubenswrapper[4430]: I1203 14:16:22.218521 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"
Dec 03 14:16:22.218829 master-0 kubenswrapper[4430]: E1203 14:16:22.218787 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"
Dec 03 14:16:22.218891 master-0 kubenswrapper[4430]: I1203 14:16:22.218832 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist"
Dec 03 14:16:22.218891 master-0 kubenswrapper[4430]: I1203 14:16:22.218864 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"
Dec 03 14:16:22.219317 master-0 kubenswrapper[4430]: E1203 14:16:22.219243 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"
Dec 03 14:16:22.219398 master-0 kubenswrapper[4430]: I1203 14:16:22.219327 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist"
Dec 03 14:16:22.219398 master-0 kubenswrapper[4430]: I1203 14:16:22.219366 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"
Dec 03 14:16:22.219875 master-0 kubenswrapper[4430]: E1203 14:16:22.219842 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"
Dec 03 14:16:22.220372 master-0 kubenswrapper[4430]: I1203 14:16:22.219865 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist"
Dec 03 14:16:22.220372 master-0 kubenswrapper[4430]: I1203 14:16:22.219918 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"
Dec 03 14:16:22.220372 master-0 kubenswrapper[4430]: E1203 14:16:22.220250 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"
Dec 03 14:16:22.220372 master-0 kubenswrapper[4430]: I1203 14:16:22.220271 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist"
Dec 03 14:16:22.220372 master-0 kubenswrapper[4430]: I1203 14:16:22.220285 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"
Dec 03 14:16:22.220828 master-0 kubenswrapper[4430]: I1203 14:16:22.220746 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist"
Dec 03 14:16:22.220913 master-0 kubenswrapper[4430]: I1203 14:16:22.220828 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"
Dec 03 14:16:22.221204 master-0 kubenswrapper[4430]: I1203 14:16:22.221180 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist"
Dec 03 14:16:22.221204 master-0 kubenswrapper[4430]: I1203 14:16:22.221201 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"
Dec 03 14:16:22.221901 master-0 kubenswrapper[4430]: I1203 14:16:22.221867 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist"
Dec 03 14:16:22.221984 master-0 kubenswrapper[4430]: I1203 14:16:22.221919 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"
Dec 03 14:16:22.222251 master-0 kubenswrapper[4430]: I1203 14:16:22.222216 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist"
Dec 03 14:16:22.222251 master-0 kubenswrapper[4430]: I1203 14:16:22.222249 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"
Dec 03 14:16:22.222639 master-0 kubenswrapper[4430]: I1203 14:16:22.222603 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist"
Dec 03 14:16:22.222724 master-0 kubenswrapper[4430]: I1203 14:16:22.222635 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"
Dec 03 14:16:22.223459 master-0 kubenswrapper[4430]: I1203 14:16:22.222943 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist"
Dec 03 14:16:22.223459 master-0 kubenswrapper[4430]: I1203 14:16:22.223038 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"
Dec 03 14:16:22.223459 master-0 kubenswrapper[4430]: I1203 14:16:22.223391 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist"
Dec 03 14:16:22.223459 master-0 kubenswrapper[4430]: I1203 14:16:22.223451 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"
Dec 03 14:16:22.223925 master-0 kubenswrapper[4430]: I1203 14:16:22.223856 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist"
Dec 03 14:16:22.223997 master-0 kubenswrapper[4430]: I1203 14:16:22.223923 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"
Dec 03 14:16:22.224278 master-0 kubenswrapper[4430]: I1203 14:16:22.224246 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist"
Dec 03 14:16:22.224278 master-0 kubenswrapper[4430]: I1203 14:16:22.224270 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"
Dec 03 14:16:22.224660 master-0 kubenswrapper[4430]: I1203 14:16:22.224617 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist"
Dec 03 14:16:22.224660 master-0 kubenswrapper[4430]: I1203 14:16:22.224649 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"
Dec 03 14:16:22.224985 master-0 kubenswrapper[4430]: I1203 14:16:22.224944 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist"
Dec 03 14:16:22.225057 master-0 kubenswrapper[4430]: I1203 14:16:22.224987 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"
Dec 03 14:16:22.225379 master-0 kubenswrapper[4430]: I1203 14:16:22.225280 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist"
Dec 03 14:16:22.225379 master-0 kubenswrapper[4430]: I1203 14:16:22.225306 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"
Dec 03 14:16:22.225961 master-0 kubenswrapper[4430]: I1203 14:16:22.225932 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist"
Dec 03 14:16:22.226045 master-0 kubenswrapper[4430]: I1203 14:16:22.225959 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"
Dec 03 14:16:22.226395 master-0 kubenswrapper[4430]: I1203 14:16:22.226353 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist"
Dec 03 14:16:22.226704 master-0 kubenswrapper[4430]: I1203 14:16:22.226393 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"
Dec 03 14:16:22.227014 master-0 kubenswrapper[4430]: I1203 14:16:22.226968 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist"
Dec 03 14:16:22.227014 master-0 kubenswrapper[4430]: I1203 14:16:22.227001 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"
Dec 03 14:16:22.227327 master-0 kubenswrapper[4430]: I1203 14:16:22.227294 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist"
Dec 03 14:16:22.227380 master-0 kubenswrapper[4430]: I1203 14:16:22.227326 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"
Dec 03 14:16:22.227686 master-0 kubenswrapper[4430]: I1203 14:16:22.227659 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist"
Dec 03 14:16:22.227686 master-0 kubenswrapper[4430]: I1203 14:16:22.227684 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"
Dec 03 14:16:22.228059 master-0 kubenswrapper[4430]: I1203 14:16:22.228030 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"}
err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist" Dec 03 14:16:22.228059 master-0 kubenswrapper[4430]: I1203 14:16:22.228054 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" Dec 03 14:16:22.228336 master-0 kubenswrapper[4430]: I1203 14:16:22.228313 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist" Dec 03 14:16:22.228412 master-0 kubenswrapper[4430]: I1203 14:16:22.228334 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612" Dec 03 14:16:22.228598 master-0 kubenswrapper[4430]: I1203 14:16:22.228575 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist" Dec 03 14:16:22.228598 master-0 
kubenswrapper[4430]: I1203 14:16:22.228596 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10" Dec 03 14:16:22.228899 master-0 kubenswrapper[4430]: I1203 14:16:22.228870 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist" Dec 03 14:16:22.228973 master-0 kubenswrapper[4430]: I1203 14:16:22.228898 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13" Dec 03 14:16:22.229174 master-0 kubenswrapper[4430]: I1203 14:16:22.229141 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist" Dec 03 14:16:22.229251 master-0 kubenswrapper[4430]: I1203 14:16:22.229172 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65" Dec 03 14:16:22.229443 master-0 kubenswrapper[4430]: I1203 14:16:22.229403 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} 
err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist" Dec 03 14:16:22.229443 master-0 kubenswrapper[4430]: I1203 14:16:22.229443 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5" Dec 03 14:16:22.229672 master-0 kubenswrapper[4430]: I1203 14:16:22.229654 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist" Dec 03 14:16:22.229672 master-0 kubenswrapper[4430]: I1203 14:16:22.229672 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098" Dec 03 14:16:22.229914 master-0 kubenswrapper[4430]: I1203 14:16:22.229882 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist" Dec 03 14:16:22.229987 master-0 
kubenswrapper[4430]: I1203 14:16:22.229911 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" Dec 03 14:16:22.230188 master-0 kubenswrapper[4430]: I1203 14:16:22.230164 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist" Dec 03 14:16:22.230239 master-0 kubenswrapper[4430]: I1203 14:16:22.230189 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612" Dec 03 14:16:22.230452 master-0 kubenswrapper[4430]: I1203 14:16:22.230408 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist" Dec 03 14:16:22.230452 master-0 kubenswrapper[4430]: I1203 14:16:22.230450 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10" Dec 03 14:16:22.230694 master-0 kubenswrapper[4430]: I1203 14:16:22.230675 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} 
err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist" Dec 03 14:16:22.230745 master-0 kubenswrapper[4430]: I1203 14:16:22.230694 4430 scope.go:117] "RemoveContainer" containerID="7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13" Dec 03 14:16:22.230964 master-0 kubenswrapper[4430]: I1203 14:16:22.230916 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13"} err="failed to get container status \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": rpc error: code = NotFound desc = could not find container \"7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13\": container with ID starting with 7daf016c315ef296d3726d4b79c4ca131dc89bf5d3da4521986ec47e53b39c13 not found: ID does not exist" Dec 03 14:16:22.230964 master-0 kubenswrapper[4430]: I1203 14:16:22.230957 4430 scope.go:117] "RemoveContainer" containerID="a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65" Dec 03 14:16:22.231323 master-0 kubenswrapper[4430]: I1203 14:16:22.231298 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65"} err="failed to get container status \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": rpc error: code = NotFound desc = could not find container \"a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65\": container with ID starting with a75561fb1bf11db84505a7500eab835b58b7982deb63addb789c7600a89f6d65 not found: ID does not exist" Dec 03 14:16:22.231323 master-0 
kubenswrapper[4430]: I1203 14:16:22.231318 4430 scope.go:117] "RemoveContainer" containerID="9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5" Dec 03 14:16:22.231642 master-0 kubenswrapper[4430]: I1203 14:16:22.231617 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5"} err="failed to get container status \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": rpc error: code = NotFound desc = could not find container \"9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5\": container with ID starting with 9d5731a3e102b49a84d7a60e8f06f8a5b403aeb2e4f775c38ce8f08d1849b6f5 not found: ID does not exist" Dec 03 14:16:22.231700 master-0 kubenswrapper[4430]: I1203 14:16:22.231644 4430 scope.go:117] "RemoveContainer" containerID="f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098" Dec 03 14:16:22.231873 master-0 kubenswrapper[4430]: I1203 14:16:22.231852 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098"} err="failed to get container status \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": rpc error: code = NotFound desc = could not find container \"f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098\": container with ID starting with f74df050dc84721221b40ffbb284fdb46803669cda812501543b25396316b098 not found: ID does not exist" Dec 03 14:16:22.231924 master-0 kubenswrapper[4430]: I1203 14:16:22.231872 4430 scope.go:117] "RemoveContainer" containerID="42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef" Dec 03 14:16:22.232166 master-0 kubenswrapper[4430]: I1203 14:16:22.232137 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef"} 
err="failed to get container status \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": rpc error: code = NotFound desc = could not find container \"42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef\": container with ID starting with 42f8c8a4104fea3a4ff97285b96a45745f012c41fc353080b601ba5416af3fef not found: ID does not exist" Dec 03 14:16:22.232263 master-0 kubenswrapper[4430]: I1203 14:16:22.232165 4430 scope.go:117] "RemoveContainer" containerID="3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612" Dec 03 14:16:22.232542 master-0 kubenswrapper[4430]: I1203 14:16:22.232514 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612"} err="failed to get container status \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": rpc error: code = NotFound desc = could not find container \"3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612\": container with ID starting with 3fc3534ead8d029ea057c0d74a716c96f5d83147c9edd212de32f46bf4330612 not found: ID does not exist" Dec 03 14:16:22.232601 master-0 kubenswrapper[4430]: I1203 14:16:22.232544 4430 scope.go:117] "RemoveContainer" containerID="b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10" Dec 03 14:16:22.232788 master-0 kubenswrapper[4430]: I1203 14:16:22.232767 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10"} err="failed to get container status \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": rpc error: code = NotFound desc = could not find container \"b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10\": container with ID starting with b0f55a600fc360b580784f67802203acc14f33725cef20a22e4b6bb1c7b5da10 not found: ID does not exist" Dec 03 14:16:22.275250 master-0 
kubenswrapper[4430]: I1203 14:16:22.275093 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.275250 master-0 kubenswrapper[4430]: I1203 14:16:22.275166 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.275250 master-0 kubenswrapper[4430]: I1203 14:16:22.275217 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.275250 master-0 kubenswrapper[4430]: I1203 14:16:22.275242 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275271 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275289 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275310 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275330 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275355 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275374 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: 
\"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275393 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275411 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275444 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275464 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275500 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275517 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275536 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.276816 master-0 kubenswrapper[4430]: I1203 14:16:22.275561 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.278111 master-0 kubenswrapper[4430]: I1203 14:16:22.276902 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.278111 master-0 kubenswrapper[4430]: I1203 14:16:22.276912 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: 
\"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.278111 master-0 kubenswrapper[4430]: I1203 14:16:22.276931 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.278111 master-0 kubenswrapper[4430]: I1203 14:16:22.278011 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.278111 master-0 kubenswrapper[4430]: I1203 14:16:22.278023 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.279799 master-0 kubenswrapper[4430]: I1203 14:16:22.279757 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.280116 master-0 kubenswrapper[4430]: I1203 14:16:22.280072 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.280604 master-0 kubenswrapper[4430]: I1203 14:16:22.280561 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.280820 master-0 kubenswrapper[4430]: I1203 14:16:22.280784 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.281265 master-0 kubenswrapper[4430]: I1203 14:16:22.281228 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.282101 master-0 kubenswrapper[4430]: I1203 14:16:22.281494 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.282101 master-0 kubenswrapper[4430]: I1203 14:16:22.282048 4430 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.282967 master-0 kubenswrapper[4430]: I1203 14:16:22.282916 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.284736 master-0 kubenswrapper[4430]: I1203 14:16:22.284696 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.284886 master-0 kubenswrapper[4430]: I1203 14:16:22.284853 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.285929 master-0 kubenswrapper[4430]: I1203 14:16:22.285896 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.290548 master-0 kubenswrapper[4430]: I1203 14:16:22.290513 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.295153 master-0 kubenswrapper[4430]: I1203 14:16:22.295100 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.487073 master-0 kubenswrapper[4430]: I1203 14:16:22.486992 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:22.917859 master-0 kubenswrapper[4430]: I1203 14:16:22.917803 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Dec 03 14:16:23.046976 master-0 kubenswrapper[4430]: I1203 14:16:23.045966 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"a56e57ee3051a8fa07b55bc7bd5c21821a3acec0b91c9acc5fb6a7bd49b44678"} Dec 03 14:16:23.591923 master-0 kubenswrapper[4430]: I1203 14:16:23.591861 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cfc08c2-f287-40b8-bf28-4f884595e93c" path="/var/lib/kubelet/pods/6cfc08c2-f287-40b8-bf28-4f884595e93c/volumes" Dec 03 14:16:24.057209 master-0 kubenswrapper[4430]: I1203 14:16:24.057119 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="74e726ae8a915e7903adf0b7351cce9dd1148b2738a446e859b837bfe2203280" exitCode=0 Dec 03 14:16:24.057209 master-0 kubenswrapper[4430]: I1203 14:16:24.057219 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"74e726ae8a915e7903adf0b7351cce9dd1148b2738a446e859b837bfe2203280"} Dec 03 14:16:25.070039 master-0 kubenswrapper[4430]: I1203 14:16:25.069977 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"08c73268b9592ac8d03df102a14fe7d51af1e19b4679e409ed49670e07faf1b0"} Dec 03 14:16:25.070039 master-0 kubenswrapper[4430]: I1203 14:16:25.070033 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"459ea4c37a38e3d087f3b8db48d49da05ce0bc01a55b7d42b544294505774666"} Dec 03 14:16:25.070039 master-0 kubenswrapper[4430]: I1203 14:16:25.070043 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"0d8d53a093e5b4a209c70f9f168898ec080129a115dd5a593d75037b55b3d878"} Dec 03 14:16:25.070039 master-0 kubenswrapper[4430]: I1203 14:16:25.070052 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"4f0c760191e6deb4a438d22186cf7870bb999b9485974340a2385e20d2dae583"} Dec 03 14:16:25.070039 master-0 kubenswrapper[4430]: I1203 14:16:25.070061 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1"} Dec 03 14:16:26.083381 master-0 kubenswrapper[4430]: I1203 14:16:26.083325 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"c44fccacafcc89a7f41f51bda645d91507bff12a5ee3a2f34749020e54160bbe"} Dec 03 14:16:26.122662 master-0 kubenswrapper[4430]: I1203 14:16:26.122519 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.122484605 podStartE2EDuration="4.122484605s" podCreationTimestamp="2025-12-03 14:16:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:16:26.116136036 +0000 UTC m=+486.739050122" watchObservedRunningTime="2025-12-03 14:16:26.122484605 +0000 UTC m=+486.745398701" Dec 03 14:16:26.728691 master-0 kubenswrapper[4430]: I1203 14:16:26.728539 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Dec 03 14:16:26.730027 master-0 kubenswrapper[4430]: I1203 14:16:26.729987 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.731620 master-0 kubenswrapper[4430]: I1203 14:16:26.731582 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-4xstp" Dec 03 14:16:26.733298 master-0 kubenswrapper[4430]: I1203 14:16:26.733258 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 14:16:26.739279 master-0 kubenswrapper[4430]: I1203 14:16:26.739221 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Dec 03 14:16:26.872527 master-0 kubenswrapper[4430]: I1203 14:16:26.872413 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.872527 master-0 kubenswrapper[4430]: I1203 14:16:26.872526 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.872905 master-0 kubenswrapper[4430]: I1203 14:16:26.872787 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " 
pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.974814 master-0 kubenswrapper[4430]: I1203 14:16:26.974746 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.974814 master-0 kubenswrapper[4430]: I1203 14:16:26.974808 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.975232 master-0 kubenswrapper[4430]: I1203 14:16:26.974868 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.975232 master-0 kubenswrapper[4430]: I1203 14:16:26.975015 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:26.975232 master-0 kubenswrapper[4430]: I1203 14:16:26.975065 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir\") pod 
\"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:27.000818 master-0 kubenswrapper[4430]: I1203 14:16:27.000664 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:27.053806 master-0 kubenswrapper[4430]: I1203 14:16:27.053696 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:16:27.487956 master-0 kubenswrapper[4430]: I1203 14:16:27.487889 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:16:27.530487 master-0 kubenswrapper[4430]: I1203 14:16:27.529738 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Dec 03 14:16:27.530612 master-0 kubenswrapper[4430]: W1203 14:16:27.530526 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9362c4d7_96a9_4400_b7d3_fd4f196b3d0f.slice/crio-69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311 WatchSource:0}: Error finding container 69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311: Status 404 returned error can't find the container with id 69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311 Dec 03 14:16:28.102047 master-0 kubenswrapper[4430]: I1203 14:16:28.101957 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" 
event={"ID":"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f","Type":"ContainerStarted","Data":"797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288"} Dec 03 14:16:28.102047 master-0 kubenswrapper[4430]: I1203 14:16:28.102044 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f","Type":"ContainerStarted","Data":"69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311"} Dec 03 14:16:31.622102 master-0 kubenswrapper[4430]: I1203 14:16:31.622039 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:16:31.622102 master-0 kubenswrapper[4430]: I1203 14:16:31.622126 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:16:31.810404 master-0 kubenswrapper[4430]: I1203 14:16:31.810302 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=5.810276362 podStartE2EDuration="5.810276362s" podCreationTimestamp="2025-12-03 14:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:16:28.120179148 +0000 UTC m=+488.743093234" watchObservedRunningTime="2025-12-03 14:16:31.810276362 +0000 UTC m=+492.433190438" Dec 03 14:16:31.812762 master-0 kubenswrapper[4430]: I1203 14:16:31.812701 4430 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Dec 03 14:16:31.813778 master-0 kubenswrapper[4430]: I1203 14:16:31.813748 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:31.815805 master-0 kubenswrapper[4430]: I1203 14:16:31.815779 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Dec 03 14:16:31.828697 master-0 kubenswrapper[4430]: I1203 14:16:31.828635 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Dec 03 14:16:31.982652 master-0 kubenswrapper[4430]: I1203 14:16:31.982576 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:31.982652 master-0 kubenswrapper[4430]: I1203 14:16:31.982660 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:31.982987 master-0 kubenswrapper[4430]: I1203 14:16:31.982829 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.084123 master-0 kubenswrapper[4430]: I1203 14:16:32.084052 4430 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.084123 master-0 kubenswrapper[4430]: I1203 14:16:32.084138 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.084500 master-0 kubenswrapper[4430]: I1203 14:16:32.084185 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.084500 master-0 kubenswrapper[4430]: I1203 14:16:32.084237 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.084500 master-0 kubenswrapper[4430]: I1203 14:16:32.084279 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.213765 master-0 kubenswrapper[4430]: I1203 14:16:32.200878 4430 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.476998 master-0 kubenswrapper[4430]: I1203 14:16:32.476883 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:16:32.900009 master-0 kubenswrapper[4430]: I1203 14:16:32.899938 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Dec 03 14:16:32.905164 master-0 kubenswrapper[4430]: W1203 14:16:32.905074 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f539ea7_39a7_422f_82d9_7603eede84cf.slice/crio-b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237 WatchSource:0}: Error finding container b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237: Status 404 returned error can't find the container with id b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237 Dec 03 14:16:33.206447 master-0 kubenswrapper[4430]: I1203 14:16:33.206218 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"3f539ea7-39a7-422f-82d9-7603eede84cf","Type":"ContainerStarted","Data":"b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237"} Dec 03 14:16:34.215348 master-0 kubenswrapper[4430]: I1203 14:16:34.215294 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"3f539ea7-39a7-422f-82d9-7603eede84cf","Type":"ContainerStarted","Data":"8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3"} Dec 03 14:16:34.245335 master-0 kubenswrapper[4430]: I1203 14:16:34.245213 4430 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-etcd/installer-2-retry-1-master-0" podStartSLOduration=3.245191979 podStartE2EDuration="3.245191979s" podCreationTimestamp="2025-12-03 14:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:16:34.239702604 +0000 UTC m=+494.862616690" watchObservedRunningTime="2025-12-03 14:16:34.245191979 +0000 UTC m=+494.868106055" Dec 03 14:16:34.926539 master-0 kubenswrapper[4430]: I1203 14:16:34.926463 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:34.927146 master-0 kubenswrapper[4430]: I1203 14:16:34.927074 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-metric" containerID="cri-o://27cdbfb2504c37ed6726323d5485c0e2e22b89f61a186620fdadabff753518bf" gracePeriod=120 Dec 03 14:16:34.927268 master-0 kubenswrapper[4430]: I1203 14:16:34.927190 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="config-reloader" containerID="cri-o://2b8e810972222c8e8af4262192b9de6b2880ece9c7675e0e85c1ef0fb73b69e3" gracePeriod=120 Dec 03 14:16:34.927336 master-0 kubenswrapper[4430]: I1203 14:16:34.927216 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy" containerID="cri-o://a30f66b778c323b468e53a0d452494721896105089b8f0bde3f60b49ba83e072" gracePeriod=120 Dec 03 14:16:34.927404 master-0 kubenswrapper[4430]: I1203 14:16:34.927187 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" 
podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-web" containerID="cri-o://a9e6448c1ed22d7af273c868f2b82ebb2cd877ea3652f571176e7c3960d01c77" gracePeriod=120 Dec 03 14:16:34.927490 master-0 kubenswrapper[4430]: I1203 14:16:34.927195 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="prom-label-proxy" containerID="cri-o://233eca8d43479e016f98704c0539e9fe320cd7a7c4ee637c4f56e040b2892a72" gracePeriod=120 Dec 03 14:16:34.927561 master-0 kubenswrapper[4430]: I1203 14:16:34.926962 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="alertmanager" containerID="cri-o://d66d8bfaeb58fed7f04b65ebb52929c7756d51c1efb8fd952bccd9549c975a8f" gracePeriod=120 Dec 03 14:16:35.227477 master-0 kubenswrapper[4430]: I1203 14:16:35.227371 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="233eca8d43479e016f98704c0539e9fe320cd7a7c4ee637c4f56e040b2892a72" exitCode=0 Dec 03 14:16:35.227477 master-0 kubenswrapper[4430]: I1203 14:16:35.227410 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="27cdbfb2504c37ed6726323d5485c0e2e22b89f61a186620fdadabff753518bf" exitCode=0 Dec 03 14:16:35.227477 master-0 kubenswrapper[4430]: I1203 14:16:35.227457 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"233eca8d43479e016f98704c0539e9fe320cd7a7c4ee637c4f56e040b2892a72"} Dec 03 14:16:35.227477 master-0 kubenswrapper[4430]: I1203 14:16:35.227487 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" 
containerID="a30f66b778c323b468e53a0d452494721896105089b8f0bde3f60b49ba83e072" exitCode=0 Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227501 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="a9e6448c1ed22d7af273c868f2b82ebb2cd877ea3652f571176e7c3960d01c77" exitCode=0 Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227509 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="2b8e810972222c8e8af4262192b9de6b2880ece9c7675e0e85c1ef0fb73b69e3" exitCode=0 Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227509 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"27cdbfb2504c37ed6726323d5485c0e2e22b89f61a186620fdadabff753518bf"} Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227527 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"a30f66b778c323b468e53a0d452494721896105089b8f0bde3f60b49ba83e072"} Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227539 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"a9e6448c1ed22d7af273c868f2b82ebb2cd877ea3652f571176e7c3960d01c77"} Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227551 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"2b8e810972222c8e8af4262192b9de6b2880ece9c7675e0e85c1ef0fb73b69e3"} Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 
14:16:35.227564 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"d66d8bfaeb58fed7f04b65ebb52929c7756d51c1efb8fd952bccd9549c975a8f"} Dec 03 14:16:35.228251 master-0 kubenswrapper[4430]: I1203 14:16:35.227517 4430 generic.go:334] "Generic (PLEG): container finished" podID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerID="d66d8bfaeb58fed7f04b65ebb52929c7756d51c1efb8fd952bccd9549c975a8f" exitCode=0 Dec 03 14:16:35.441301 master-0 kubenswrapper[4430]: I1203 14:16:35.441243 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:35.546626 master-0 kubenswrapper[4430]: I1203 14:16:35.546511 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546626 master-0 kubenswrapper[4430]: I1203 14:16:35.546603 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546658 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546694 4430 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546759 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546813 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546858 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.546881 master-0 kubenswrapper[4430]: I1203 14:16:35.546883 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.547172 master-0 kubenswrapper[4430]: I1203 14:16:35.546916 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.547172 master-0 kubenswrapper[4430]: I1203 14:16:35.546944 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.547172 master-0 kubenswrapper[4430]: I1203 14:16:35.546968 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.547172 master-0 kubenswrapper[4430]: I1203 14:16:35.546988 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") pod \"ff21a9a5-706f-4c71-bd0c-5586374f819a\" (UID: \"ff21a9a5-706f-4c71-bd0c-5586374f819a\") " Dec 03 14:16:35.547380 master-0 kubenswrapper[4430]: I1203 14:16:35.547338 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "alertmanager-main-db". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:16:35.547813 master-0 kubenswrapper[4430]: I1203 14:16:35.547757 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:35.547879 master-0 kubenswrapper[4430]: I1203 14:16:35.547786 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:16:35.550748 master-0 kubenswrapper[4430]: I1203 14:16:35.550696 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.550821 master-0 kubenswrapper[4430]: I1203 14:16:35.550735 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out" (OuterVolumeSpecName: "config-out") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:16:35.550821 master-0 kubenswrapper[4430]: I1203 14:16:35.550804 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.551287 master-0 kubenswrapper[4430]: I1203 14:16:35.551240 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.551426 master-0 kubenswrapper[4430]: I1203 14:16:35.551401 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7" (OuterVolumeSpecName: "kube-api-access-52zj7") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "kube-api-access-52zj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:16:35.551753 master-0 kubenswrapper[4430]: I1203 14:16:35.551720 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume" (OuterVolumeSpecName: "config-volume") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.551806 master-0 kubenswrapper[4430]: I1203 14:16:35.551711 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:16:35.551908 master-0 kubenswrapper[4430]: I1203 14:16:35.551871 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.598377 master-0 kubenswrapper[4430]: I1203 14:16:35.598316 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config" (OuterVolumeSpecName: "web-config") pod "ff21a9a5-706f-4c71-bd0c-5586374f819a" (UID: "ff21a9a5-706f-4c71-bd0c-5586374f819a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:16:35.658277 master-0 kubenswrapper[4430]: I1203 14:16:35.658208 4430 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-web-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658542 master-0 kubenswrapper[4430]: I1203 14:16:35.658290 4430 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658542 master-0 kubenswrapper[4430]: I1203 14:16:35.658350 4430 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658542 master-0 kubenswrapper[4430]: I1203 14:16:35.658493 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52zj7\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-kube-api-access-52zj7\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658813 master-0 kubenswrapper[4430]: I1203 14:16:35.658787 4430 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ff21a9a5-706f-4c71-bd0c-5586374f819a-tls-assets\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658813 master-0 kubenswrapper[4430]: I1203 14:16:35.658813 4430 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658823 4430 reconciler_common.go:293] "Volume detached for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658835 4430 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658857 4430 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-config-out\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658872 4430 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/ff21a9a5-706f-4c71-bd0c-5586374f819a-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658884 4430 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/ff21a9a5-706f-4c71-bd0c-5586374f819a-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:35.658891 master-0 kubenswrapper[4430]: I1203 14:16:35.658894 4430 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ff21a9a5-706f-4c71-bd0c-5586374f819a-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:16:36.239852 master-0 kubenswrapper[4430]: I1203 14:16:36.239789 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"ff21a9a5-706f-4c71-bd0c-5586374f819a","Type":"ContainerDied","Data":"ec5722f5529eb51f3812e83ffa10d076433aba61390ef045b51dbc13c084feab"} Dec 03 14:16:36.240603 master-0 kubenswrapper[4430]: I1203 14:16:36.239874 4430 scope.go:117] "RemoveContainer" containerID="233eca8d43479e016f98704c0539e9fe320cd7a7c4ee637c4f56e040b2892a72" Dec 03 14:16:36.240603 master-0 kubenswrapper[4430]: I1203 14:16:36.239898 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.256630 master-0 kubenswrapper[4430]: I1203 14:16:36.256595 4430 scope.go:117] "RemoveContainer" containerID="27cdbfb2504c37ed6726323d5485c0e2e22b89f61a186620fdadabff753518bf" Dec 03 14:16:36.269730 master-0 kubenswrapper[4430]: I1203 14:16:36.269670 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:36.276269 master-0 kubenswrapper[4430]: I1203 14:16:36.276212 4430 scope.go:117] "RemoveContainer" containerID="a30f66b778c323b468e53a0d452494721896105089b8f0bde3f60b49ba83e072" Dec 03 14:16:36.290658 master-0 kubenswrapper[4430]: I1203 14:16:36.290587 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:36.295913 master-0 kubenswrapper[4430]: I1203 14:16:36.295878 4430 scope.go:117] "RemoveContainer" containerID="a9e6448c1ed22d7af273c868f2b82ebb2cd877ea3652f571176e7c3960d01c77" Dec 03 14:16:36.315199 master-0 kubenswrapper[4430]: I1203 14:16:36.315151 4430 scope.go:117] "RemoveContainer" containerID="2b8e810972222c8e8af4262192b9de6b2880ece9c7675e0e85c1ef0fb73b69e3" Dec 03 14:16:36.331868 master-0 kubenswrapper[4430]: I1203 14:16:36.331835 4430 scope.go:117] "RemoveContainer" containerID="d66d8bfaeb58fed7f04b65ebb52929c7756d51c1efb8fd952bccd9549c975a8f" Dec 03 14:16:36.427995 master-0 kubenswrapper[4430]: I1203 14:16:36.427949 4430 scope.go:117] "RemoveContainer" 
containerID="06d9e21dc5026b2ff8ed704174dd5237e899ac63961af3e8259a610d962004eb" Dec 03 14:16:36.431370 master-0 kubenswrapper[4430]: I1203 14:16:36.431320 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:36.431685 master-0 kubenswrapper[4430]: E1203 14:16:36.431656 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="prom-label-proxy" Dec 03 14:16:36.431685 master-0 kubenswrapper[4430]: I1203 14:16:36.431687 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="prom-label-proxy" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: E1203 14:16:36.431708 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431714 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: E1203 14:16:36.431726 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="init-config-reloader" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431734 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="init-config-reloader" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: E1203 14:16:36.431743 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-metric" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431749 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-metric" Dec 03 14:16:36.431853 master-0 
kubenswrapper[4430]: E1203 14:16:36.431759 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-web" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431765 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-web" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: E1203 14:16:36.431773 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="alertmanager" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431779 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="alertmanager" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: E1203 14:16:36.431787 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="config-reloader" Dec 03 14:16:36.431853 master-0 kubenswrapper[4430]: I1203 14:16:36.431793 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="config-reloader" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431932 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-metric" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431942 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy-web" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431949 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="kube-rbac-proxy" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431959 4430 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="prom-label-proxy" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431967 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="alertmanager" Dec 03 14:16:36.432727 master-0 kubenswrapper[4430]: I1203 14:16:36.431977 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" containerName="config-reloader" Dec 03 14:16:36.434285 master-0 kubenswrapper[4430]: I1203 14:16:36.434249 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.438511 master-0 kubenswrapper[4430]: I1203 14:16:36.437765 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 03 14:16:36.438511 master-0 kubenswrapper[4430]: I1203 14:16:36.437830 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 03 14:16:36.438511 master-0 kubenswrapper[4430]: I1203 14:16:36.437994 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 03 14:16:36.438511 master-0 kubenswrapper[4430]: I1203 14:16:36.438242 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 03 14:16:36.441047 master-0 kubenswrapper[4430]: I1203 14:16:36.440954 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 03 14:16:36.441268 master-0 kubenswrapper[4430]: I1203 14:16:36.441143 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 03 14:16:36.441268 master-0 kubenswrapper[4430]: I1203 14:16:36.441178 4430 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 03 14:16:36.443685 master-0 kubenswrapper[4430]: I1203 14:16:36.443620 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-vpms9" Dec 03 14:16:36.447261 master-0 kubenswrapper[4430]: I1203 14:16:36.447226 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 03 14:16:36.491468 master-0 kubenswrapper[4430]: I1203 14:16:36.456156 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:36.613244 master-0 kubenswrapper[4430]: I1203 14:16:36.613023 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613244 master-0 kubenswrapper[4430]: I1203 14:16:36.613093 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613244 master-0 kubenswrapper[4430]: I1203 14:16:36.613162 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613244 master-0 
kubenswrapper[4430]: I1203 14:16:36.613178 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613244 master-0 kubenswrapper[4430]: I1203 14:16:36.613199 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613911 master-0 kubenswrapper[4430]: I1203 14:16:36.613284 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613911 master-0 kubenswrapper[4430]: I1203 14:16:36.613321 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613911 master-0 kubenswrapper[4430]: I1203 14:16:36.613864 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.613911 master-0 kubenswrapper[4430]: I1203 14:16:36.613903 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.614262 master-0 kubenswrapper[4430]: I1203 14:16:36.613925 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.614262 master-0 kubenswrapper[4430]: I1203 14:16:36.613998 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.614262 master-0 kubenswrapper[4430]: I1203 14:16:36.614211 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.715468 master-0 
kubenswrapper[4430]: I1203 14:16:36.715388 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.715468 master-0 kubenswrapper[4430]: I1203 14:16:36.715487 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715549 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715581 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715608 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 
kubenswrapper[4430]: I1203 14:16:36.715634 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715661 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715685 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715706 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715724 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: 
\"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715745 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.716661 master-0 kubenswrapper[4430]: I1203 14:16:36.715770 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.717130 master-0 kubenswrapper[4430]: I1203 14:16:36.716789 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.717130 master-0 kubenswrapper[4430]: I1203 14:16:36.717065 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.717345 master-0 kubenswrapper[4430]: I1203 14:16:36.717304 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.720831 master-0 kubenswrapper[4430]: I1203 14:16:36.720792 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.720831 master-0 kubenswrapper[4430]: I1203 14:16:36.720826 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.721193 master-0 kubenswrapper[4430]: I1203 14:16:36.721134 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.722032 master-0 kubenswrapper[4430]: I1203 14:16:36.721994 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.722108 master-0 kubenswrapper[4430]: I1203 14:16:36.722043 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.722241 master-0 kubenswrapper[4430]: I1203 14:16:36.722209 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.728196 master-0 kubenswrapper[4430]: I1203 14:16:36.728135 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.728756 master-0 kubenswrapper[4430]: I1203 14:16:36.728718 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.743168 master-0 kubenswrapper[4430]: I1203 14:16:36.743066 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:36.842361 master-0 kubenswrapper[4430]: I1203 14:16:36.842272 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:16:37.245066 master-0 kubenswrapper[4430]: I1203 14:16:37.245005 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 03 14:16:37.247377 master-0 kubenswrapper[4430]: W1203 14:16:37.247348 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d838c1a_22e2_4096_9739_7841ef7d06ba.slice/crio-58583afa3d6e1e90dc7bc371b14dc1a87a0e9c836c1f67221629664b68724db5 WatchSource:0}: Error finding container 58583afa3d6e1e90dc7bc371b14dc1a87a0e9c836c1f67221629664b68724db5: Status 404 returned error can't find the container with id 58583afa3d6e1e90dc7bc371b14dc1a87a0e9c836c1f67221629664b68724db5 Dec 03 14:16:37.592209 master-0 kubenswrapper[4430]: I1203 14:16:37.592129 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff21a9a5-706f-4c71-bd0c-5586374f819a" path="/var/lib/kubelet/pods/ff21a9a5-706f-4c71-bd0c-5586374f819a/volumes" Dec 03 14:16:38.260267 master-0 kubenswrapper[4430]: I1203 14:16:38.260192 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="0cf77faeeaaafe59dfe31a68931e095a146e9a60506f75439ca9134242fcde25" exitCode=0 Dec 03 14:16:38.261091 master-0 kubenswrapper[4430]: I1203 14:16:38.260267 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"0cf77faeeaaafe59dfe31a68931e095a146e9a60506f75439ca9134242fcde25"} Dec 03 14:16:38.261091 master-0 kubenswrapper[4430]: I1203 14:16:38.260346 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"58583afa3d6e1e90dc7bc371b14dc1a87a0e9c836c1f67221629664b68724db5"} Dec 03 
14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270596 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"665ad677aaa74ebbffda03136c226e5418f7caeb08ac0c3d1e8ee4e37c185fd5"}
Dec 03 14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270675 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"51da4cadfe494abf9d71139ebc5ee02a538f34346eee9d660a0740d99613a7ee"}
Dec 03 14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270686 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"acab1430bd92b580220f13247b2a261291dbaed1747728645f8c995585d74972"}
Dec 03 14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270696 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"cd94bbbc8c57e2e3fade7ecec5038282abb27807ec06c1f19cbee3b4e3fc8fb3"}
Dec 03 14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270706 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"b64f5f125b688edf175adfcffc5124baddf49d5600e593c50b76523fed1745a9"}
Dec 03 14:16:39.270688 master-0 kubenswrapper[4430]: I1203 14:16:39.270714 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"b6466aa546730021e5613ad7d04a35e0e9f305a56aebdaaa8f3e64231dd4ef70"}
Dec 03 14:16:39.313973 master-0 kubenswrapper[4430]: I1203 14:16:39.313887 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.313869576 podStartE2EDuration="3.313869576s" podCreationTimestamp="2025-12-03 14:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:16:39.306859258 +0000 UTC m=+499.929773354" watchObservedRunningTime="2025-12-03 14:16:39.313869576 +0000 UTC m=+499.936783652"
Dec 03 14:16:49.580671 master-0 kubenswrapper[4430]: I1203 14:16:49.580590 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"]
Dec 03 14:16:49.581986 master-0 kubenswrapper[4430]: I1203 14:16:49.581948 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.628725 master-0 kubenswrapper[4430]: I1203 14:16:49.628665 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.628725 master-0 kubenswrapper[4430]: I1203 14:16:49.628728 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.629043 master-0 kubenswrapper[4430]: I1203 14:16:49.628981 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.629187 master-0 kubenswrapper[4430]: I1203 14:16:49.629117 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.629276 master-0 kubenswrapper[4430]: I1203 14:16:49.629256 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.629412 master-0 kubenswrapper[4430]: I1203 14:16:49.629352 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.629493 master-0 kubenswrapper[4430]: I1203 14:16:49.629447 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.631970 master-0 kubenswrapper[4430]: I1203 14:16:49.631864 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"]
Dec 03 14:16:49.730802 master-0 kubenswrapper[4430]: I1203 14:16:49.730734 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731071 master-0 kubenswrapper[4430]: I1203 14:16:49.730817 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731071 master-0 kubenswrapper[4430]: I1203 14:16:49.730857 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731071 master-0 kubenswrapper[4430]: I1203 14:16:49.730929 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731071 master-0 kubenswrapper[4430]: I1203 14:16:49.730952 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731293 master-0 kubenswrapper[4430]: I1203 14:16:49.731088 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.731293 master-0 kubenswrapper[4430]: I1203 14:16:49.731133 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.732267 master-0 kubenswrapper[4430]: I1203 14:16:49.731748 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.732267 master-0 kubenswrapper[4430]: I1203 14:16:49.732053 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.733304 master-0 kubenswrapper[4430]: I1203 14:16:49.733263 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.734558 master-0 kubenswrapper[4430]: I1203 14:16:49.734523 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.734859 master-0 kubenswrapper[4430]: I1203 14:16:49.734825 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.735501 master-0 kubenswrapper[4430]: I1203 14:16:49.735464 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.825115 master-0 kubenswrapper[4430]: I1203 14:16:49.825043 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:49.913040 master-0 kubenswrapper[4430]: I1203 14:16:49.912857 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:50.411351 master-0 kubenswrapper[4430]: I1203 14:16:50.411260 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"]
Dec 03 14:16:50.411638 master-0 kubenswrapper[4430]: W1203 14:16:50.411600 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b442f72_b113_4227_93b5_ea1ae90d5154.slice/crio-2519d3ef01296d55fa75e695c79534fd34999eadd314c27ee5517bcf1ac005ff WatchSource:0}: Error finding container 2519d3ef01296d55fa75e695c79534fd34999eadd314c27ee5517bcf1ac005ff: Status 404 returned error can't find the container with id 2519d3ef01296d55fa75e695c79534fd34999eadd314c27ee5517bcf1ac005ff
Dec 03 14:16:51.361825 master-0 kubenswrapper[4430]: I1203 14:16:51.361758 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerStarted","Data":"7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae"}
Dec 03 14:16:51.361825 master-0 kubenswrapper[4430]: I1203 14:16:51.361814 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerStarted","Data":"2519d3ef01296d55fa75e695c79534fd34999eadd314c27ee5517bcf1ac005ff"}
Dec 03 14:16:51.390287 master-0 kubenswrapper[4430]: I1203 14:16:51.390177 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c9c84854-xf7nv" podStartSLOduration=2.3901431029999998 podStartE2EDuration="2.390143103s" podCreationTimestamp="2025-12-03 14:16:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:16:51.3868551 +0000 UTC m=+512.009769196" watchObservedRunningTime="2025-12-03 14:16:51.390143103 +0000 UTC m=+512.013057179"
Dec 03 14:16:59.913407 master-0 kubenswrapper[4430]: I1203 14:16:59.913335 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:59.913407 master-0 kubenswrapper[4430]: I1203 14:16:59.913413 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:16:59.918741 master-0 kubenswrapper[4430]: I1203 14:16:59.918676 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:17:00.437379 master-0 kubenswrapper[4430]: I1203 14:17:00.437331 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:17:00.517435 master-0 kubenswrapper[4430]: I1203 14:17:00.517066 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59fc685495-qcxmz"]
Dec 03 14:17:00.574198 master-0 kubenswrapper[4430]: I1203 14:17:00.574126 4430 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.574488 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller" containerID="cri-o://fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68" gracePeriod=30
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.574835 4430 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575208 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager" containerID="cri-o://4cf8b9de739b42d0326a9c91865874c6acc457ec4a815cac41e5776a7dc74502" gracePeriod=30
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: E1203 14:17:00.575321 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575342 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: E1203 14:17:00.575366 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575374 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: E1203 14:17:00.575392 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575400 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575599 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="cluster-policy-controller"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575621 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575631 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.575666 master-0 kubenswrapper[4430]: I1203 14:17:00.575648 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.576370 master-0 kubenswrapper[4430]: E1203 14:17:00.575811 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.576370 master-0 kubenswrapper[4430]: I1203 14:17:00.575822 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bce50c457ac1f4721bc81a570dd238a" containerName="kube-controller-manager"
Dec 03 14:17:00.579437 master-0 kubenswrapper[4430]: I1203 14:17:00.577328 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.602128 master-0 kubenswrapper[4430]: I1203 14:17:00.602082 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.602350 master-0 kubenswrapper[4430]: I1203 14:17:00.602329 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.634332 master-0 kubenswrapper[4430]: I1203 14:17:00.633164 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Dec 03 14:17:00.704074 master-0 kubenswrapper[4430]: I1203 14:17:00.704015 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.704189 master-0 kubenswrapper[4430]: I1203 14:17:00.704129 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.704189 master-0 kubenswrapper[4430]: I1203 14:17:00.704145 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.704286 master-0 kubenswrapper[4430]: I1203 14:17:00.704220 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.748343 master-0 kubenswrapper[4430]: I1203 14:17:00.748282 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:17:00.781822 master-0 kubenswrapper[4430]: I1203 14:17:00.781725 4430 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="6cb85e17-7e83-4845-834b-381f63dce73e"
Dec 03 14:17:00.805477 master-0 kubenswrapper[4430]: I1203 14:17:00.805388 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") pod \"7bce50c457ac1f4721bc81a570dd238a\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") "
Dec 03 14:17:00.805477 master-0 kubenswrapper[4430]: I1203 14:17:00.805486 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") pod \"7bce50c457ac1f4721bc81a570dd238a\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") "
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805569 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") pod \"7bce50c457ac1f4721bc81a570dd238a\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") "
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805600 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") pod \"7bce50c457ac1f4721bc81a570dd238a\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") "
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805620 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") pod \"7bce50c457ac1f4721bc81a570dd238a\" (UID: \"7bce50c457ac1f4721bc81a570dd238a\") "
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805917 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "7bce50c457ac1f4721bc81a570dd238a" (UID: "7bce50c457ac1f4721bc81a570dd238a"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805952 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs" (OuterVolumeSpecName: "logs") pod "7bce50c457ac1f4721bc81a570dd238a" (UID: "7bce50c457ac1f4721bc81a570dd238a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805967 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets" (OuterVolumeSpecName: "secrets") pod "7bce50c457ac1f4721bc81a570dd238a" (UID: "7bce50c457ac1f4721bc81a570dd238a"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805983 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config" (OuterVolumeSpecName: "config") pod "7bce50c457ac1f4721bc81a570dd238a" (UID: "7bce50c457ac1f4721bc81a570dd238a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:00.806021 master-0 kubenswrapper[4430]: I1203 14:17:00.805998 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "7bce50c457ac1f4721bc81a570dd238a" (UID: "7bce50c457ac1f4721bc81a570dd238a"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:00.907057 master-0 kubenswrapper[4430]: I1203 14:17:00.907000 4430 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-secrets\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:00.907057 master-0 kubenswrapper[4430]: I1203 14:17:00.907040 4430 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-config\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:00.907057 master-0 kubenswrapper[4430]: I1203 14:17:00.907054 4430 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:00.907057 master-0 kubenswrapper[4430]: I1203 14:17:00.907063 4430 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:00.907057 master-0 kubenswrapper[4430]: I1203 14:17:00.907073 4430 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/7bce50c457ac1f4721bc81a570dd238a-logs\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:00.926944 master-0 kubenswrapper[4430]: I1203 14:17:00.926868 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:17:00.948580 master-0 kubenswrapper[4430]: W1203 14:17:00.948523 4430 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1dbec7c25a38180c3a6691040eb5a8.slice/crio-d559782f641f31253f0525f84d6898f947ea64f48bb332a8b0d28e7e38036e65 WatchSource:0}: Error finding container d559782f641f31253f0525f84d6898f947ea64f48bb332a8b0d28e7e38036e65: Status 404 returned error can't find the container with id d559782f641f31253f0525f84d6898f947ea64f48bb332a8b0d28e7e38036e65
Dec 03 14:17:01.442563 master-0 kubenswrapper[4430]: I1203 14:17:01.442514 4430 generic.go:334] "Generic (PLEG): container finished" podID="9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" containerID="797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288" exitCode=0
Dec 03 14:17:01.442682 master-0 kubenswrapper[4430]: I1203 14:17:01.442612 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f","Type":"ContainerDied","Data":"797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288"}
Dec 03 14:17:01.445088 master-0 kubenswrapper[4430]: I1203 14:17:01.445040 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb"}
Dec 03 14:17:01.445169 master-0 kubenswrapper[4430]: I1203 14:17:01.445101 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"d559782f641f31253f0525f84d6898f947ea64f48bb332a8b0d28e7e38036e65"}
Dec 03 14:17:01.447598 master-0 kubenswrapper[4430]: I1203 14:17:01.447560 4430 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="4cf8b9de739b42d0326a9c91865874c6acc457ec4a815cac41e5776a7dc74502" exitCode=0
Dec 03 14:17:01.447659 master-0 kubenswrapper[4430]: I1203 14:17:01.447606 4430 scope.go:117] "RemoveContainer" containerID="dfa3e2a5e850f1c2cd7d301ad8987da02b5536d592d454c2329b18b72b7128b7"
Dec 03 14:17:01.447659 master-0 kubenswrapper[4430]: I1203 14:17:01.447618 4430 generic.go:334] "Generic (PLEG): container finished" podID="7bce50c457ac1f4721bc81a570dd238a" containerID="fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68" exitCode=0
Dec 03 14:17:01.447732 master-0 kubenswrapper[4430]: I1203 14:17:01.447652 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Dec 03 14:17:01.447785 master-0 kubenswrapper[4430]: I1203 14:17:01.447718 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e51da08bd48915ea30bfcde5f4d5d7acf0cc89b7a7be0c7aa34951094a4fe8"
Dec 03 14:17:01.596150 master-0 kubenswrapper[4430]: I1203 14:17:01.596085 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bce50c457ac1f4721bc81a570dd238a" path="/var/lib/kubelet/pods/7bce50c457ac1f4721bc81a570dd238a/volumes"
Dec 03 14:17:01.597261 master-0 kubenswrapper[4430]: I1203 14:17:01.597218 4430 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Dec 03 14:17:01.617037 master-0 kubenswrapper[4430]: I1203 14:17:01.616991 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Dec 03 14:17:01.617037 master-0 kubenswrapper[4430]: I1203 14:17:01.617029 4430 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="6cb85e17-7e83-4845-834b-381f63dce73e"
Dec 03 14:17:01.622466 master-0 kubenswrapper[4430]: I1203 14:17:01.622384 4430 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:17:01.622594 master-0 kubenswrapper[4430]: I1203 14:17:01.622519 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:17:01.623739 master-0 kubenswrapper[4430]: I1203 14:17:01.623710 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Dec 03 14:17:01.623830 master-0 kubenswrapper[4430]: I1203 14:17:01.623815 4430 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="6cb85e17-7e83-4845-834b-381f63dce73e"
Dec 03 14:17:02.579301 master-0 kubenswrapper[4430]: I1203 14:17:02.578990 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"bf6199bcbec88449c79d8c0fecef851ed148f249eec05a4f4ea3ebc9e8451088"}
Dec 03 14:17:02.579301 master-0 kubenswrapper[4430]: I1203 14:17:02.579125 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3"}
Dec 03 14:17:02.579301 master-0 kubenswrapper[4430]: I1203 14:17:02.579144 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769"}
Dec 03 14:17:02.702355 master-0 kubenswrapper[4430]: I1203 14:17:02.701851 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.701825692 podStartE2EDuration="2.701825692s" podCreationTimestamp="2025-12-03 14:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:17:02.695039951 +0000 UTC m=+523.317954027" watchObservedRunningTime="2025-12-03 14:17:02.701825692 +0000 UTC m=+523.324739768"
Dec 03 14:17:02.952272 master-0 kubenswrapper[4430]: I1203 14:17:02.952206 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Dec 03 14:17:03.077685 master-0 kubenswrapper[4430]: I1203 14:17:03.077592 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock\") pod \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") "
Dec 03 14:17:03.077993 master-0 kubenswrapper[4430]: I1203 14:17:03.077802 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock" (OuterVolumeSpecName: "var-lock") pod "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" (UID: "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:03.078753 master-0 kubenswrapper[4430]: I1203 14:17:03.078689 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access\") pod \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") "
Dec 03 14:17:03.079619 master-0 kubenswrapper[4430]: I1203 14:17:03.079580 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir\") pod \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\" (UID: \"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\") "
Dec 03 14:17:03.079870 master-0 kubenswrapper[4430]: I1203 14:17:03.079803 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" (UID: "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:17:03.080367 master-0 kubenswrapper[4430]: I1203 14:17:03.080330 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:03.080367 master-0 kubenswrapper[4430]: I1203 14:17:03.080361 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:03.090040 master-0 kubenswrapper[4430]: I1203 14:17:03.089975 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" (UID: "9362c4d7-96a9-4400-b7d3-fd4f196b3d0f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:17:03.182325 master-0 kubenswrapper[4430]: I1203 14:17:03.182142 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9362c4d7-96a9-4400-b7d3-fd4f196b3d0f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 14:17:03.634511 master-0 kubenswrapper[4430]: I1203 14:17:03.634454 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f","Type":"ContainerDied","Data":"69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311"}
Dec 03 14:17:03.634511 master-0 kubenswrapper[4430]: I1203 14:17:03.634499 4430 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:17:03.635095 master-0 kubenswrapper[4430]: I1203 14:17:03.634513 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e37e3173fcf7d95264f65a455099a36543634651d4e30d4c62876695e1d311" Dec 03 14:17:04.264315 master-0 kubenswrapper[4430]: I1203 14:17:04.264267 4430 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Dec 03 14:17:04.264972 master-0 kubenswrapper[4430]: I1203 14:17:04.264941 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcdctl" containerID="cri-o://963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" gracePeriod=30 Dec 03 14:17:04.265095 master-0 kubenswrapper[4430]: I1203 14:17:04.264996 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-rev" containerID="cri-o://79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" gracePeriod=30 Dec 03 14:17:04.265159 master-0 kubenswrapper[4430]: I1203 14:17:04.265075 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-readyz" containerID="cri-o://e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" gracePeriod=30 Dec 03 14:17:04.265205 master-0 kubenswrapper[4430]: I1203 14:17:04.265117 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-metrics" containerID="cri-o://76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" gracePeriod=30 Dec 03 14:17:04.265257 master-0 kubenswrapper[4430]: I1203 14:17:04.265206 4430 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" containerID="cri-o://39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" gracePeriod=30 Dec 03 14:17:04.268037 master-0 kubenswrapper[4430]: I1203 14:17:04.267936 4430 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Dec 03 14:17:04.268568 master-0 kubenswrapper[4430]: E1203 14:17:04.268540 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-readyz" Dec 03 14:17:04.268568 master-0 kubenswrapper[4430]: I1203 14:17:04.268565 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-readyz" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: E1203 14:17:04.268600 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-ensure-env-vars" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: I1203 14:17:04.268610 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-ensure-env-vars" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: E1203 14:17:04.268617 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="setup" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: I1203 14:17:04.268625 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="setup" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: E1203 14:17:04.268636 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcdctl" Dec 03 14:17:04.268677 master-0 kubenswrapper[4430]: I1203 14:17:04.268643 4430 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcdctl" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: E1203 14:17:04.268655 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" containerName="installer" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: I1203 14:17:04.268696 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" containerName="installer" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: E1203 14:17:04.268719 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: I1203 14:17:04.268730 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: E1203 14:17:04.268741 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-resources-copy" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: I1203 14:17:04.268767 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-resources-copy" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: E1203 14:17:04.268811 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-rev" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: I1203 14:17:04.268819 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-rev" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: E1203 14:17:04.268854 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-metrics" Dec 03 14:17:04.269053 master-0 kubenswrapper[4430]: I1203 14:17:04.268863 4430 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-metrics" Dec 03 14:17:04.269564 master-0 kubenswrapper[4430]: I1203 14:17:04.269303 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="9362c4d7-96a9-4400-b7d3-fd4f196b3d0f" containerName="installer" Dec 03 14:17:04.269564 master-0 kubenswrapper[4430]: I1203 14:17:04.269322 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="setup" Dec 03 14:17:04.269564 master-0 kubenswrapper[4430]: I1203 14:17:04.269367 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-metrics" Dec 03 14:17:04.269716 master-0 kubenswrapper[4430]: I1203 14:17:04.269647 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcdctl" Dec 03 14:17:04.269716 master-0 kubenswrapper[4430]: I1203 14:17:04.269668 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-ensure-env-vars" Dec 03 14:17:04.269716 master-0 kubenswrapper[4430]: I1203 14:17:04.269678 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd" Dec 03 14:17:04.269716 master-0 kubenswrapper[4430]: I1203 14:17:04.269712 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-resources-copy" Dec 03 14:17:04.269891 master-0 kubenswrapper[4430]: I1203 14:17:04.269720 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-readyz" Dec 03 14:17:04.269891 master-0 kubenswrapper[4430]: I1203 14:17:04.269727 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf07eb54db570834b7c9a90b6b07403" containerName="etcd-rev" Dec 03 14:17:04.401390 master-0 
kubenswrapper[4430]: I1203 14:17:04.401293 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.401722 master-0 kubenswrapper[4430]: I1203 14:17:04.401397 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.401722 master-0 kubenswrapper[4430]: I1203 14:17:04.401548 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.401722 master-0 kubenswrapper[4430]: I1203 14:17:04.401586 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.401722 master-0 kubenswrapper[4430]: I1203 14:17:04.401613 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.401722 master-0 kubenswrapper[4430]: I1203 14:17:04.401705 4430 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503271 master-0 kubenswrapper[4430]: I1203 14:17:04.503203 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503271 master-0 kubenswrapper[4430]: I1203 14:17:04.503286 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503315 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503345 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503368 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: 
\"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503402 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503408 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503443 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503506 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503569 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.503702 master-0 kubenswrapper[4430]: I1203 14:17:04.503590 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.504000 master-0 kubenswrapper[4430]: I1203 14:17:04.503590 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:17:04.646028 master-0 kubenswrapper[4430]: I1203 14:17:04.645870 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-rev/1.log" Dec 03 14:17:04.647162 master-0 kubenswrapper[4430]: I1203 14:17:04.647132 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-metrics/1.log" Dec 03 14:17:04.649806 master-0 kubenswrapper[4430]: I1203 14:17:04.649763 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" exitCode=2 Dec 03 14:17:04.649806 master-0 kubenswrapper[4430]: I1203 14:17:04.649799 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" exitCode=0 Dec 03 14:17:04.649915 master-0 kubenswrapper[4430]: I1203 14:17:04.649810 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" exitCode=2 Dec 03 14:17:10.927403 master-0 kubenswrapper[4430]: I1203 14:17:10.927311 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:10.928258 master-0 kubenswrapper[4430]: I1203 14:17:10.927413 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:10.928258 master-0 kubenswrapper[4430]: I1203 14:17:10.928020 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:10.928258 master-0 kubenswrapper[4430]: I1203 14:17:10.928032 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:10.931943 master-0 kubenswrapper[4430]: I1203 14:17:10.931904 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:10.932280 master-0 kubenswrapper[4430]: I1203 14:17:10.932230 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:11.714764 master-0 kubenswrapper[4430]: I1203 14:17:11.714658 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:11.715623 master-0 kubenswrapper[4430]: I1203 14:17:11.715584 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:17:15.423537 master-0 kubenswrapper[4430]: E1203 14:17:15.423393 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:17:18.762854 master-0 
kubenswrapper[4430]: I1203 14:17:18.762767 4430 generic.go:334] "Generic (PLEG): container finished" podID="3f539ea7-39a7-422f-82d9-7603eede84cf" containerID="8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3" exitCode=0 Dec 03 14:17:18.762854 master-0 kubenswrapper[4430]: I1203 14:17:18.762835 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"3f539ea7-39a7-422f-82d9-7603eede84cf","Type":"ContainerDied","Data":"8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3"} Dec 03 14:17:19.853582 master-0 kubenswrapper[4430]: I1203 14:17:19.853500 4430 scope.go:117] "RemoveContainer" containerID="fdf56fc794aa77373b36eb33a16bfc344506e67df2fd75e2ef4b6b33e973db68" Dec 03 14:17:19.916061 master-0 kubenswrapper[4430]: E1203 14:17:19.915979 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a\": container with ID starting with e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a not found: ID does not exist" containerID="e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a" Dec 03 14:17:19.916244 master-0 kubenswrapper[4430]: I1203 14:17:19.916076 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a" err="rpc error: code = NotFound desc = could not find container \"e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a\": container with ID starting with e61519d8185d0bcf145b0e0b6418994041bef16f5e1a99ebd43381fc375fbc4a not found: ID does not exist" Dec 03 14:17:19.917389 master-0 kubenswrapper[4430]: E1203 14:17:19.917323 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab\": 
container with ID starting with 971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab not found: ID does not exist" containerID="971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab" Dec 03 14:17:19.917486 master-0 kubenswrapper[4430]: I1203 14:17:19.917412 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab" err="rpc error: code = NotFound desc = could not find container \"971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab\": container with ID starting with 971b7dc6d62caa743a6f63b49d02247325b0d2e0c7ba426f9388d2ab4d3fb2ab not found: ID does not exist" Dec 03 14:17:19.918349 master-0 kubenswrapper[4430]: E1203 14:17:19.918305 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528\": container with ID starting with 5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528 not found: ID does not exist" containerID="5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528" Dec 03 14:17:19.918461 master-0 kubenswrapper[4430]: I1203 14:17:19.918360 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528" err="rpc error: code = NotFound desc = could not find container \"5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528\": container with ID starting with 5bb0308e170c5d5c040c6fc90c8f98e7d9d11dd42b0e1cad5f0116d0f60a0528 not found: ID does not exist" Dec 03 14:17:19.919074 master-0 kubenswrapper[4430]: E1203 14:17:19.919041 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6\": container with ID starting with 
8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6 not found: ID does not exist" containerID="8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6" Dec 03 14:17:19.919167 master-0 kubenswrapper[4430]: I1203 14:17:19.919074 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6" err="rpc error: code = NotFound desc = could not find container \"8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6\": container with ID starting with 8f73d07981bb6e2708a023f880f9d08e383d2f1f2c1c38a57ddc160e4e65f7c6 not found: ID does not exist" Dec 03 14:17:19.919614 master-0 kubenswrapper[4430]: E1203 14:17:19.919569 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23\": container with ID starting with 84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23 not found: ID does not exist" containerID="84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23" Dec 03 14:17:19.919681 master-0 kubenswrapper[4430]: I1203 14:17:19.919611 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23" err="rpc error: code = NotFound desc = could not find container \"84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23\": container with ID starting with 84982d642af54835db849f1bcd0f14d120aa1e0962e4ca08d91ad87aeda01d23 not found: ID does not exist" Dec 03 14:17:19.920079 master-0 kubenswrapper[4430]: E1203 14:17:19.920044 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0\": container with ID starting with 
074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0 not found: ID does not exist" containerID="074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0" Dec 03 14:17:19.920371 master-0 kubenswrapper[4430]: I1203 14:17:19.920077 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0" err="rpc error: code = NotFound desc = could not find container \"074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0\": container with ID starting with 074b7d192a45a3f8190d3adfb05a61227fa915f52e5a45f9dcc836a4bad6bff0 not found: ID does not exist" Dec 03 14:17:19.920858 master-0 kubenswrapper[4430]: E1203 14:17:19.920815 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072\": container with ID starting with 57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072 not found: ID does not exist" containerID="57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072" Dec 03 14:17:19.921071 master-0 kubenswrapper[4430]: I1203 14:17:19.921048 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072" err="rpc error: code = NotFound desc = could not find container \"57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072\": container with ID starting with 57a51ad0c99e09b55f68b6e38d9043c1e0994ef3830325f571f1531a77680072 not found: ID does not exist" Dec 03 14:17:19.921883 master-0 kubenswrapper[4430]: E1203 14:17:19.921834 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c\": container with ID starting with 
28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c not found: ID does not exist" containerID="28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c" Dec 03 14:17:19.921983 master-0 kubenswrapper[4430]: I1203 14:17:19.921900 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c" err="rpc error: code = NotFound desc = could not find container \"28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c\": container with ID starting with 28d4248523e80a5e2739ca252a05923a0b9a9f571d7fbd6b774b4a753089e35c not found: ID does not exist" Dec 03 14:17:19.922855 master-0 kubenswrapper[4430]: E1203 14:17:19.922826 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc\": container with ID starting with 50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc not found: ID does not exist" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" Dec 03 14:17:19.922928 master-0 kubenswrapper[4430]: I1203 14:17:19.922852 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc" err="rpc error: code = NotFound desc = could not find container \"50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc\": container with ID starting with 50c17dbdf0a105c127d96b59062c421af48f858d5ce7d40fd636396a72a26edc not found: ID does not exist" Dec 03 14:17:19.923193 master-0 kubenswrapper[4430]: E1203 14:17:19.923163 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22\": container with ID starting with 
b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22 not found: ID does not exist" containerID="b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22" Dec 03 14:17:19.923411 master-0 kubenswrapper[4430]: I1203 14:17:19.923195 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22" err="rpc error: code = NotFound desc = could not find container \"b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22\": container with ID starting with b396012262e4eaaed7818b1f14f11d074df5245d49f0b3d66105100ccf06ce22 not found: ID does not exist" Dec 03 14:17:19.926947 master-0 kubenswrapper[4430]: E1203 14:17:19.926908 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd\": container with ID starting with 248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd not found: ID does not exist" containerID="248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd" Dec 03 14:17:19.927108 master-0 kubenswrapper[4430]: I1203 14:17:19.927083 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd" err="rpc error: code = NotFound desc = could not find container \"248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd\": container with ID starting with 248a7857907bae9cfdda294f613f627ef0df23e51f75bd9e0ba43f55a6aa89cd not found: ID does not exist" Dec 03 14:17:19.927646 master-0 kubenswrapper[4430]: E1203 14:17:19.927614 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c\": container with ID starting with 
6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c not found: ID does not exist" containerID="6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c" Dec 03 14:17:19.927747 master-0 kubenswrapper[4430]: I1203 14:17:19.927653 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c" err="rpc error: code = NotFound desc = could not find container \"6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c\": container with ID starting with 6a06f41e53eb0649a2b294b11fe072fa162451a3dbc51b8d7072ed7b1f6d5d1c not found: ID does not exist" Dec 03 14:17:19.928108 master-0 kubenswrapper[4430]: E1203 14:17:19.928075 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856\": container with ID starting with 2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856 not found: ID does not exist" containerID="2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856" Dec 03 14:17:19.928216 master-0 kubenswrapper[4430]: I1203 14:17:19.928191 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856" err="rpc error: code = NotFound desc = could not find container \"2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856\": container with ID starting with 2b6a561bffcde2db391c11583fb225497fab783c3c4b310ad500fd832df2b856 not found: ID does not exist" Dec 03 14:17:19.928672 master-0 kubenswrapper[4430]: E1203 14:17:19.928637 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2\": container with ID starting with 
38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2 not found: ID does not exist" containerID="38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2" Dec 03 14:17:19.928750 master-0 kubenswrapper[4430]: I1203 14:17:19.928676 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2" err="rpc error: code = NotFound desc = could not find container \"38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2\": container with ID starting with 38f06c797c581a4bdd935d4ba09267697af65328fc483be518d3d131527ca1e2 not found: ID does not exist" Dec 03 14:17:19.929065 master-0 kubenswrapper[4430]: E1203 14:17:19.929027 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c\": container with ID starting with 13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c not found: ID does not exist" containerID="13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c" Dec 03 14:17:19.929065 master-0 kubenswrapper[4430]: I1203 14:17:19.929060 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c" err="rpc error: code = NotFound desc = could not find container \"13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c\": container with ID starting with 13e102b299bffaf705c779fbc9a162b6872657eb14e65030a342e5de213f533c not found: ID does not exist" Dec 03 14:17:19.929618 master-0 kubenswrapper[4430]: E1203 14:17:19.929401 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe\": container with ID starting with 
2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe not found: ID does not exist" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" Dec 03 14:17:19.929618 master-0 kubenswrapper[4430]: I1203 14:17:19.929444 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe" err="rpc error: code = NotFound desc = could not find container \"2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe\": container with ID starting with 2b06c487d0757b0cbf97890996997c7e3bcf1af907ce06d6c3b1849dd77212fe not found: ID does not exist" Dec 03 14:17:19.929766 master-0 kubenswrapper[4430]: E1203 14:17:19.929731 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2\": container with ID starting with 644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2 not found: ID does not exist" containerID="644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2" Dec 03 14:17:19.929825 master-0 kubenswrapper[4430]: I1203 14:17:19.929762 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2" err="rpc error: code = NotFound desc = could not find container \"644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2\": container with ID starting with 644eff3e47783a7f63320e76e8a715a971f5ecfb24775f32f828b5d7c5c08ac2 not found: ID does not exist" Dec 03 14:17:20.100059 master-0 kubenswrapper[4430]: I1203 14:17:20.100007 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:17:20.213370 master-0 kubenswrapper[4430]: I1203 14:17:20.213006 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access\") pod \"3f539ea7-39a7-422f-82d9-7603eede84cf\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " Dec 03 14:17:20.213370 master-0 kubenswrapper[4430]: I1203 14:17:20.213173 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock\") pod \"3f539ea7-39a7-422f-82d9-7603eede84cf\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " Dec 03 14:17:20.213370 master-0 kubenswrapper[4430]: I1203 14:17:20.213335 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir\") pod \"3f539ea7-39a7-422f-82d9-7603eede84cf\" (UID: \"3f539ea7-39a7-422f-82d9-7603eede84cf\") " Dec 03 14:17:20.214552 master-0 kubenswrapper[4430]: I1203 14:17:20.213371 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock" (OuterVolumeSpecName: "var-lock") pod "3f539ea7-39a7-422f-82d9-7603eede84cf" (UID: "3f539ea7-39a7-422f-82d9-7603eede84cf"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:20.214552 master-0 kubenswrapper[4430]: I1203 14:17:20.213452 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f539ea7-39a7-422f-82d9-7603eede84cf" (UID: "3f539ea7-39a7-422f-82d9-7603eede84cf"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:20.214552 master-0 kubenswrapper[4430]: I1203 14:17:20.213654 4430 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:20.214552 master-0 kubenswrapper[4430]: I1203 14:17:20.213666 4430 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f539ea7-39a7-422f-82d9-7603eede84cf-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:20.218038 master-0 kubenswrapper[4430]: I1203 14:17:20.217988 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f539ea7-39a7-422f-82d9-7603eede84cf" (UID: "3f539ea7-39a7-422f-82d9-7603eede84cf"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:17:20.315808 master-0 kubenswrapper[4430]: I1203 14:17:20.315639 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f539ea7-39a7-422f-82d9-7603eede84cf-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:20.780299 master-0 kubenswrapper[4430]: I1203 14:17:20.780249 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"3f539ea7-39a7-422f-82d9-7603eede84cf","Type":"ContainerDied","Data":"b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237"} Dec 03 14:17:20.780650 master-0 kubenswrapper[4430]: I1203 14:17:20.780626 4430 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b905f20d9aa549ce3b8dcbc9b46ceddcf945e70d5602e2655d07e979176ad237" Dec 03 14:17:20.780759 master-0 kubenswrapper[4430]: I1203 14:17:20.780330 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:17:22.488367 master-0 kubenswrapper[4430]: I1203 14:17:22.488252 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:17:22.530520 master-0 kubenswrapper[4430]: I1203 14:17:22.530452 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:17:22.842732 master-0 kubenswrapper[4430]: I1203 14:17:22.842606 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:17:25.425122 master-0 kubenswrapper[4430]: E1203 14:17:25.425023 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:17:25.557124 master-0 kubenswrapper[4430]: I1203 14:17:25.556981 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-59fc685495-qcxmz" podUID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" containerName="console" containerID="cri-o://a4e04e6c524ae142d3954a6e6c8326b5e0e2ba6787ee66a27feaa743b480ba37" gracePeriod=15 Dec 03 14:17:25.836556 master-0 kubenswrapper[4430]: I1203 14:17:25.835814 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59fc685495-qcxmz_1566ad2b-b965-4ba3-8a8b-f93b39e732c8/console/0.log" Dec 03 14:17:25.836556 master-0 kubenswrapper[4430]: I1203 14:17:25.835887 4430 generic.go:334] "Generic (PLEG): container finished" podID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" containerID="a4e04e6c524ae142d3954a6e6c8326b5e0e2ba6787ee66a27feaa743b480ba37" exitCode=2 Dec 03 14:17:25.836556 master-0 kubenswrapper[4430]: I1203 14:17:25.835930 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-59fc685495-qcxmz" event={"ID":"1566ad2b-b965-4ba3-8a8b-f93b39e732c8","Type":"ContainerDied","Data":"a4e04e6c524ae142d3954a6e6c8326b5e0e2ba6787ee66a27feaa743b480ba37"} Dec 03 14:17:26.106719 master-0 kubenswrapper[4430]: I1203 14:17:26.106666 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59fc685495-qcxmz_1566ad2b-b965-4ba3-8a8b-f93b39e732c8/console/0.log" Dec 03 14:17:26.106954 master-0 kubenswrapper[4430]: I1203 14:17:26.106740 4430 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:17:26.132900 master-0 kubenswrapper[4430]: I1203 14:17:26.132850 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.132900 master-0 kubenswrapper[4430]: I1203 14:17:26.132896 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.133051 master-0 kubenswrapper[4430]: I1203 14:17:26.132917 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2zs6\" (UniqueName: \"kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.133051 master-0 kubenswrapper[4430]: I1203 14:17:26.132987 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.133051 master-0 kubenswrapper[4430]: I1203 14:17:26.133010 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.133051 master-0 kubenswrapper[4430]: I1203 14:17:26.133036 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.133227 master-0 kubenswrapper[4430]: I1203 14:17:26.133092 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert\") pod \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\" (UID: \"1566ad2b-b965-4ba3-8a8b-f93b39e732c8\") " Dec 03 14:17:26.134130 master-0 kubenswrapper[4430]: I1203 14:17:26.134081 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config" (OuterVolumeSpecName: "console-config") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:17:26.134289 master-0 kubenswrapper[4430]: I1203 14:17:26.134261 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:17:26.134289 master-0 kubenswrapper[4430]: I1203 14:17:26.134273 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca" (OuterVolumeSpecName: "service-ca") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:17:26.134626 master-0 kubenswrapper[4430]: I1203 14:17:26.134528 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:17:26.136533 master-0 kubenswrapper[4430]: I1203 14:17:26.136497 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:17:26.136854 master-0 kubenswrapper[4430]: I1203 14:17:26.136775 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6" (OuterVolumeSpecName: "kube-api-access-r2zs6") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "kube-api-access-r2zs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:17:26.137346 master-0 kubenswrapper[4430]: I1203 14:17:26.137289 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1566ad2b-b965-4ba3-8a8b-f93b39e732c8" (UID: "1566ad2b-b965-4ba3-8a8b-f93b39e732c8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:17:26.235454 master-0 kubenswrapper[4430]: I1203 14:17:26.235338 4430 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235454 master-0 kubenswrapper[4430]: I1203 14:17:26.235409 4430 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-service-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235454 master-0 kubenswrapper[4430]: I1203 14:17:26.235457 4430 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235976 master-0 kubenswrapper[4430]: I1203 14:17:26.235480 4430 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235976 master-0 kubenswrapper[4430]: I1203 14:17:26.235501 4430 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-console-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235976 master-0 kubenswrapper[4430]: I1203 14:17:26.235519 4430 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.235976 master-0 kubenswrapper[4430]: I1203 14:17:26.235539 4430 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2zs6\" (UniqueName: \"kubernetes.io/projected/1566ad2b-b965-4ba3-8a8b-f93b39e732c8-kube-api-access-r2zs6\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:26.847877 master-0 kubenswrapper[4430]: I1203 14:17:26.847831 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59fc685495-qcxmz_1566ad2b-b965-4ba3-8a8b-f93b39e732c8/console/0.log" Dec 03 14:17:26.848491 master-0 kubenswrapper[4430]: I1203 14:17:26.847896 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59fc685495-qcxmz" event={"ID":"1566ad2b-b965-4ba3-8a8b-f93b39e732c8","Type":"ContainerDied","Data":"4956cd007e6ecb51108f9e9a78c71dec3f8014c0541cfe11d2ebc9322d1d01de"} Dec 03 14:17:26.848491 master-0 kubenswrapper[4430]: I1203 14:17:26.847941 4430 scope.go:117] "RemoveContainer" containerID="a4e04e6c524ae142d3954a6e6c8326b5e0e2ba6787ee66a27feaa743b480ba37" Dec 03 14:17:26.848491 master-0 kubenswrapper[4430]: I1203 14:17:26.848066 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59fc685495-qcxmz" Dec 03 14:17:34.907339 master-0 kubenswrapper[4430]: I1203 14:17:34.907235 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-rev/1.log" Dec 03 14:17:34.915645 master-0 kubenswrapper[4430]: I1203 14:17:34.910733 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-metrics/1.log" Dec 03 14:17:34.921269 master-0 kubenswrapper[4430]: I1203 14:17:34.921210 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd/1.log" Dec 03 14:17:34.925072 master-0 kubenswrapper[4430]: I1203 14:17:34.925018 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcdctl/2.log" Dec 03 14:17:34.927483 master-0 kubenswrapper[4430]: I1203 14:17:34.927405 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:17:34.929019 master-0 kubenswrapper[4430]: I1203 14:17:34.928967 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-rev/1.log" Dec 03 14:17:34.930803 master-0 kubenswrapper[4430]: I1203 14:17:34.930764 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd-metrics/1.log" Dec 03 14:17:34.931667 master-0 kubenswrapper[4430]: I1203 14:17:34.931624 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcd/1.log" Dec 03 14:17:34.932487 master-0 kubenswrapper[4430]: I1203 14:17:34.932411 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_ebf07eb54db570834b7c9a90b6b07403/etcdctl/2.log" Dec 03 14:17:34.934067 master-0 kubenswrapper[4430]: I1203 14:17:34.934017 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" exitCode=137 Dec 03 14:17:34.934067 master-0 kubenswrapper[4430]: I1203 14:17:34.934063 4430 generic.go:334] "Generic (PLEG): container finished" podID="ebf07eb54db570834b7c9a90b6b07403" containerID="963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" exitCode=137 Dec 03 14:17:34.934207 master-0 kubenswrapper[4430]: I1203 14:17:34.934141 4430 scope.go:117] "RemoveContainer" containerID="79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" Dec 03 14:17:34.961608 master-0 kubenswrapper[4430]: I1203 14:17:34.961542 4430 scope.go:117] "RemoveContainer" containerID="e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" Dec 03 14:17:34.987376 master-0 kubenswrapper[4430]: I1203 14:17:34.987285 4430 scope.go:117] "RemoveContainer" 
containerID="76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" Dec 03 14:17:35.008245 master-0 kubenswrapper[4430]: I1203 14:17:35.008191 4430 scope.go:117] "RemoveContainer" containerID="39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" Dec 03 14:17:35.034827 master-0 kubenswrapper[4430]: I1203 14:17:35.034724 4430 scope.go:117] "RemoveContainer" containerID="963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" Dec 03 14:17:35.058992 master-0 kubenswrapper[4430]: I1203 14:17:35.058875 4430 scope.go:117] "RemoveContainer" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" Dec 03 14:17:35.086532 master-0 kubenswrapper[4430]: I1203 14:17:35.086453 4430 scope.go:117] "RemoveContainer" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" Dec 03 14:17:35.095970 master-0 kubenswrapper[4430]: I1203 14:17:35.095904 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096137 master-0 kubenswrapper[4430]: I1203 14:17:35.096002 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.096283 master-0 kubenswrapper[4430]: I1203 14:17:35.096237 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096283 master-0 kubenswrapper[4430]: I1203 14:17:35.096272 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.096480 master-0 kubenswrapper[4430]: I1203 14:17:35.096311 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096480 master-0 kubenswrapper[4430]: I1203 14:17:35.096391 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.096762 master-0 kubenswrapper[4430]: I1203 14:17:35.096559 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096762 master-0 kubenswrapper[4430]: I1203 14:17:35.096617 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096762 master-0 kubenswrapper[4430]: I1203 14:17:35.096678 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir" (OuterVolumeSpecName: "data-dir") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.096762 master-0 kubenswrapper[4430]: I1203 14:17:35.096725 4430 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") pod \"ebf07eb54db570834b7c9a90b6b07403\" (UID: \"ebf07eb54db570834b7c9a90b6b07403\") " Dec 03 14:17:35.096762 master-0 kubenswrapper[4430]: I1203 14:17:35.096767 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir" (OuterVolumeSpecName: "log-dir") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.097134 master-0 kubenswrapper[4430]: I1203 14:17:35.096852 4430 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebf07eb54db570834b7c9a90b6b07403" (UID: "ebf07eb54db570834b7c9a90b6b07403"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:17:35.097460 master-0 kubenswrapper[4430]: I1203 14:17:35.097385 4430 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-data-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.097576 master-0 kubenswrapper[4430]: I1203 14:17:35.097472 4430 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-log-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.097576 master-0 kubenswrapper[4430]: I1203 14:17:35.097501 4430 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.097576 master-0 kubenswrapper[4430]: I1203 14:17:35.097531 4430 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.097576 master-0 kubenswrapper[4430]: I1203 14:17:35.097553 4430 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-cert-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.097878 master-0 kubenswrapper[4430]: I1203 14:17:35.097580 4430 reconciler_common.go:293] "Volume detached for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/ebf07eb54db570834b7c9a90b6b07403-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:17:35.113239 master-0 kubenswrapper[4430]: I1203 14:17:35.113175 4430 scope.go:117] "RemoveContainer" containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" Dec 03 14:17:35.152855 master-0 kubenswrapper[4430]: I1203 14:17:35.151523 4430 scope.go:117] "RemoveContainer" containerID="79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" Dec 03 14:17:35.155182 master-0 kubenswrapper[4430]: E1203 14:17:35.155100 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82\": container with ID starting with 79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82 not found: ID does not exist" containerID="79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" Dec 03 14:17:35.155292 master-0 kubenswrapper[4430]: I1203 14:17:35.155195 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82"} err="failed to get container status \"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82\": rpc error: code = NotFound desc = could not find container \"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82\": container with ID starting with 79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82 not found: ID does not exist" Dec 03 14:17:35.155292 master-0 kubenswrapper[4430]: I1203 14:17:35.155239 4430 scope.go:117] "RemoveContainer" containerID="e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" Dec 03 14:17:35.156044 master-0 kubenswrapper[4430]: E1203 14:17:35.155821 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a\": container with ID starting with e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a not found: ID does not exist" containerID="e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" Dec 03 14:17:35.156117 master-0 kubenswrapper[4430]: I1203 14:17:35.156063 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a"} err="failed to get container status \"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a\": rpc error: code = NotFound desc = could not find container \"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a\": container with ID starting with e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a not found: ID does not exist" Dec 03 14:17:35.156170 master-0 kubenswrapper[4430]: I1203 14:17:35.156121 4430 scope.go:117] "RemoveContainer" containerID="76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" Dec 03 14:17:35.156778 master-0 kubenswrapper[4430]: E1203 14:17:35.156709 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3\": container with ID starting with 76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3 not found: ID does not exist" containerID="76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" Dec 03 14:17:35.156871 master-0 kubenswrapper[4430]: I1203 14:17:35.156785 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3"} err="failed to get container status \"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3\": rpc error: code = NotFound desc = could not find container 
\"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3\": container with ID starting with 76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3 not found: ID does not exist" Dec 03 14:17:35.156871 master-0 kubenswrapper[4430]: I1203 14:17:35.156818 4430 scope.go:117] "RemoveContainer" containerID="39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" Dec 03 14:17:35.157320 master-0 kubenswrapper[4430]: E1203 14:17:35.157266 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367\": container with ID starting with 39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367 not found: ID does not exist" containerID="39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" Dec 03 14:17:35.157401 master-0 kubenswrapper[4430]: I1203 14:17:35.157318 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367"} err="failed to get container status \"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367\": rpc error: code = NotFound desc = could not find container \"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367\": container with ID starting with 39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367 not found: ID does not exist" Dec 03 14:17:35.157401 master-0 kubenswrapper[4430]: I1203 14:17:35.157351 4430 scope.go:117] "RemoveContainer" containerID="963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" Dec 03 14:17:35.157880 master-0 kubenswrapper[4430]: E1203 14:17:35.157821 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be\": container with ID starting with 
963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be not found: ID does not exist" containerID="963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" Dec 03 14:17:35.157946 master-0 kubenswrapper[4430]: I1203 14:17:35.157885 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be"} err="failed to get container status \"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be\": rpc error: code = NotFound desc = could not find container \"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be\": container with ID starting with 963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be not found: ID does not exist" Dec 03 14:17:35.158048 master-0 kubenswrapper[4430]: I1203 14:17:35.157939 4430 scope.go:117] "RemoveContainer" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" Dec 03 14:17:35.158402 master-0 kubenswrapper[4430]: E1203 14:17:35.158352 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045\": container with ID starting with 5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045 not found: ID does not exist" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" Dec 03 14:17:35.158482 master-0 kubenswrapper[4430]: I1203 14:17:35.158399 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045"} err="failed to get container status \"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045\": rpc error: code = NotFound desc = could not find container \"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045\": container with ID starting with 
5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045 not found: ID does not exist" Dec 03 14:17:35.158531 master-0 kubenswrapper[4430]: I1203 14:17:35.158485 4430 scope.go:117] "RemoveContainer" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" Dec 03 14:17:35.159122 master-0 kubenswrapper[4430]: E1203 14:17:35.159057 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1\": container with ID starting with 35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1 not found: ID does not exist" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" Dec 03 14:17:35.159199 master-0 kubenswrapper[4430]: I1203 14:17:35.159127 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1"} err="failed to get container status \"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1\": rpc error: code = NotFound desc = could not find container \"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1\": container with ID starting with 35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1 not found: ID does not exist" Dec 03 14:17:35.159199 master-0 kubenswrapper[4430]: I1203 14:17:35.159159 4430 scope.go:117] "RemoveContainer" containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" Dec 03 14:17:35.159640 master-0 kubenswrapper[4430]: E1203 14:17:35.159596 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7\": container with ID starting with c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7 not found: ID does not exist" 
containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" Dec 03 14:17:35.159701 master-0 kubenswrapper[4430]: I1203 14:17:35.159644 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7"} err="failed to get container status \"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7\": rpc error: code = NotFound desc = could not find container \"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7\": container with ID starting with c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7 not found: ID does not exist" Dec 03 14:17:35.159701 master-0 kubenswrapper[4430]: I1203 14:17:35.159673 4430 scope.go:117] "RemoveContainer" containerID="79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82" Dec 03 14:17:35.160069 master-0 kubenswrapper[4430]: I1203 14:17:35.160020 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82"} err="failed to get container status \"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82\": rpc error: code = NotFound desc = could not find container \"79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82\": container with ID starting with 79c235878ca06f7083c9ab9750fa9c2d0ddbe2fb0d20ac29a46db097ba311a82 not found: ID does not exist" Dec 03 14:17:35.160069 master-0 kubenswrapper[4430]: I1203 14:17:35.160063 4430 scope.go:117] "RemoveContainer" containerID="e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a" Dec 03 14:17:35.160612 master-0 kubenswrapper[4430]: I1203 14:17:35.160553 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a"} err="failed to get container status 
\"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a\": rpc error: code = NotFound desc = could not find container \"e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a\": container with ID starting with e268587eb8a5af6cf98969c62354a10113d556a7fca88b3e241640fda705c49a not found: ID does not exist" Dec 03 14:17:35.160676 master-0 kubenswrapper[4430]: I1203 14:17:35.160612 4430 scope.go:117] "RemoveContainer" containerID="76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3" Dec 03 14:17:35.161106 master-0 kubenswrapper[4430]: I1203 14:17:35.161048 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3"} err="failed to get container status \"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3\": rpc error: code = NotFound desc = could not find container \"76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3\": container with ID starting with 76a08e3ec9cc1e6cd6cac4448aac141b7ad630135e3b628b941e62318eb50ac3 not found: ID does not exist" Dec 03 14:17:35.161106 master-0 kubenswrapper[4430]: I1203 14:17:35.161101 4430 scope.go:117] "RemoveContainer" containerID="39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367" Dec 03 14:17:35.161568 master-0 kubenswrapper[4430]: I1203 14:17:35.161513 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367"} err="failed to get container status \"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367\": rpc error: code = NotFound desc = could not find container \"39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367\": container with ID starting with 39b194724b77345b65317f5d17b71ea1ee17ffce6a18c2b78922cb6a46386367 not found: ID does not exist" Dec 03 14:17:35.161640 master-0 kubenswrapper[4430]: I1203 14:17:35.161566 4430 
scope.go:117] "RemoveContainer" containerID="963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be" Dec 03 14:17:35.162083 master-0 kubenswrapper[4430]: I1203 14:17:35.162021 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be"} err="failed to get container status \"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be\": rpc error: code = NotFound desc = could not find container \"963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be\": container with ID starting with 963060bba05fc97ba7868d1912191aac8aa0d1377feaf4f8447a4e2493c685be not found: ID does not exist" Dec 03 14:17:35.162083 master-0 kubenswrapper[4430]: I1203 14:17:35.162077 4430 scope.go:117] "RemoveContainer" containerID="5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045" Dec 03 14:17:35.162559 master-0 kubenswrapper[4430]: I1203 14:17:35.162504 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045"} err="failed to get container status \"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045\": rpc error: code = NotFound desc = could not find container \"5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045\": container with ID starting with 5f7a63a2b4be3a20059399481e35c46252d2f747a1e1b654fb036be24aea9045 not found: ID does not exist" Dec 03 14:17:35.162629 master-0 kubenswrapper[4430]: I1203 14:17:35.162556 4430 scope.go:117] "RemoveContainer" containerID="35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1" Dec 03 14:17:35.163737 master-0 kubenswrapper[4430]: I1203 14:17:35.163620 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1"} err="failed to get container status 
\"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1\": rpc error: code = NotFound desc = could not find container \"35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1\": container with ID starting with 35b20e43e9a082c8a9782d4f55c367b85100beb901e30942f83d2fb790bf1fc1 not found: ID does not exist" Dec 03 14:17:35.163737 master-0 kubenswrapper[4430]: I1203 14:17:35.163671 4430 scope.go:117] "RemoveContainer" containerID="c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7" Dec 03 14:17:35.164135 master-0 kubenswrapper[4430]: I1203 14:17:35.164075 4430 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7"} err="failed to get container status \"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7\": rpc error: code = NotFound desc = could not find container \"c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7\": container with ID starting with c9e0f6c4fba7b746fb0ab51cda73a08bf5fc58a8df5f3bbd8cd5ce4137d6eea7 not found: ID does not exist" Dec 03 14:17:35.426041 master-0 kubenswrapper[4430]: E1203 14:17:35.425818 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:17:35.596668 master-0 kubenswrapper[4430]: I1203 14:17:35.596587 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf07eb54db570834b7c9a90b6b07403" path="/var/lib/kubelet/pods/ebf07eb54db570834b7c9a90b6b07403/volumes" Dec 03 14:17:35.945231 master-0 kubenswrapper[4430]: I1203 14:17:35.945123 4430 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:17:38.286962 master-0 kubenswrapper[4430]: E1203 14:17:38.286742 4430 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.187dba429d12c56b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:ebf07eb54db570834b7c9a90b6b07403,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:17:04.264963435 +0000 UTC m=+524.887877521,LastTimestamp:2025-12-03 14:17:04.264963435 +0000 UTC m=+524.887877521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:17:38.583738 master-0 kubenswrapper[4430]: I1203 14:17:38.583554 4430 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:17:38.621524 master-0 kubenswrapper[4430]: I1203 14:17:38.621388 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:17:38.621524 master-0 kubenswrapper[4430]: I1203 14:17:38.621526 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:17:45.426715 master-0 kubenswrapper[4430]: E1203 14:17:45.426590 4430 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:17:55.100618 master-0 kubenswrapper[4430]: I1203 14:17:55.100570 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/1.log" Dec 03 14:17:55.102782 master-0 kubenswrapper[4430]: I1203 14:17:55.101449 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1" exitCode=1 Dec 03 14:17:55.102782 master-0 kubenswrapper[4430]: I1203 14:17:55.101525 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1"} Dec 03 14:17:55.102782 master-0 kubenswrapper[4430]: I1203 14:17:55.102453 4430 scope.go:117] "RemoveContainer" containerID="a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1" Dec 03 14:17:55.427845 master-0 kubenswrapper[4430]: E1203 14:17:55.427683 4430 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:17:55.427845 master-0 kubenswrapper[4430]: I1203 14:17:55.427752 4430 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 14:17:56.112753 master-0 kubenswrapper[4430]: I1203 14:17:56.112688 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/1.log" Dec 03 14:17:56.113706 master-0 kubenswrapper[4430]: I1203 14:17:56.113639 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c"} Dec 03 14:18:05.429197 master-0 kubenswrapper[4430]: E1203 14:18:05.428974 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Dec 03 14:18:08.203457 master-0 kubenswrapper[4430]: I1203 14:18:08.203365 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/4.log" Dec 03 14:18:08.204027 master-0 kubenswrapper[4430]: I1203 14:18:08.203732 4430 generic.go:334] "Generic (PLEG): container finished" podID="da583723-b3ad-4a6f-b586-09b739bd7f8c" containerID="0beebef07f0cead91e9334247c292ae81789441d58dee39e91d6971b5f65df56" exitCode=1 Dec 03 14:18:08.204027 master-0 kubenswrapper[4430]: I1203 14:18:08.203777 4430 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerDied","Data":"0beebef07f0cead91e9334247c292ae81789441d58dee39e91d6971b5f65df56"} Dec 03 14:18:08.204323 master-0 kubenswrapper[4430]: I1203 14:18:08.204294 4430 scope.go:117] "RemoveContainer" containerID="0beebef07f0cead91e9334247c292ae81789441d58dee39e91d6971b5f65df56" Dec 03 14:18:09.223291 master-0 kubenswrapper[4430]: I1203 14:18:09.223190 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/4.log" Dec 03 14:18:09.224366 master-0 kubenswrapper[4430]: I1203 14:18:09.223633 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"43bf3109c8eaedbe3dd590a49c242b2c73b0d2a5937982df49275d827c77658e"} Dec 03 14:18:10.933850 master-0 kubenswrapper[4430]: I1203 14:18:10.933780 4430 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Dec 03 14:18:12.290388 master-0 kubenswrapper[4430]: E1203 14:18:12.290049 4430 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.187dba429d1447cc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:ebf07eb54db570834b7c9a90b6b07403,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:17:04.265062348 +0000 UTC m=+524.887976424,LastTimestamp:2025-12-03 14:17:04.265062348 +0000 UTC m=+524.887976424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:18:12.624810 master-0 kubenswrapper[4430]: E1203 14:18:12.624666 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Dec 03 14:18:12.625350 master-0 kubenswrapper[4430]: I1203 14:18:12.625313 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:18:13.260208 master-0 kubenswrapper[4430]: I1203 14:18:13.260125 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="27ba8fc156b9ad3ed3b286b700a4581df8d607c267e734e35b06553c1cfaba0c" exitCode=0 Dec 03 14:18:13.260208 master-0 kubenswrapper[4430]: I1203 14:18:13.260184 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"27ba8fc156b9ad3ed3b286b700a4581df8d607c267e734e35b06553c1cfaba0c"} Dec 03 14:18:13.260208 master-0 kubenswrapper[4430]: I1203 14:18:13.260224 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"0482786491f2ebf464cca06dcf2eb3fdc428b15699dbb6e8f25b7e7b4e3e3435"} Dec 03 14:18:13.260843 master-0 kubenswrapper[4430]: I1203 
14:18:13.260600 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:18:13.260843 master-0 kubenswrapper[4430]: I1203 14:18:13.260616 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:18:15.631202 master-0 kubenswrapper[4430]: E1203 14:18:15.631044 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Dec 03 14:18:19.979695 master-0 kubenswrapper[4430]: E1203 14:18:19.979629 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286\": container with ID starting with be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286 not found: ID does not exist" containerID="be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286" Dec 03 14:18:19.979695 master-0 kubenswrapper[4430]: I1203 14:18:19.979682 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286" err="rpc error: code = NotFound desc = could not find container \"be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286\": container with ID starting with be99b88802ab220e0f188d341a6ae8ca872bcc21b0a83fc28f9d829644c09286 not found: ID does not exist" Dec 03 14:18:19.980814 master-0 kubenswrapper[4430]: E1203 14:18:19.980207 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3\": container with ID starting with 
039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3 not found: ID does not exist" containerID="039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3" Dec 03 14:18:19.980814 master-0 kubenswrapper[4430]: I1203 14:18:19.980291 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3" err="rpc error: code = NotFound desc = could not find container \"039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3\": container with ID starting with 039068af4cb3262d12c72a217404209a1874136e7d2d72b500bf40a823d372f3 not found: ID does not exist" Dec 03 14:18:19.981220 master-0 kubenswrapper[4430]: E1203 14:18:19.981163 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20\": container with ID starting with 04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20 not found: ID does not exist" containerID="04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20" Dec 03 14:18:19.981220 master-0 kubenswrapper[4430]: I1203 14:18:19.981197 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20" err="rpc error: code = NotFound desc = could not find container \"04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20\": container with ID starting with 04ea9afbbcdca16f4ce4df57584d36f333ac33dd812706e64a6a288c9d13db20 not found: ID does not exist" Dec 03 14:18:19.981733 master-0 kubenswrapper[4430]: E1203 14:18:19.981675 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1\": container with ID starting with 
7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1 not found: ID does not exist" containerID="7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1" Dec 03 14:18:19.981847 master-0 kubenswrapper[4430]: I1203 14:18:19.981737 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1" err="rpc error: code = NotFound desc = could not find container \"7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1\": container with ID starting with 7b80ee3df0c2e471f09af463df3a386c3046b2a0e1173438e0a79d2656bbe1a1 not found: ID does not exist" Dec 03 14:18:19.982488 master-0 kubenswrapper[4430]: E1203 14:18:19.982386 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92\": container with ID starting with 5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92 not found: ID does not exist" containerID="5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92" Dec 03 14:18:19.982488 master-0 kubenswrapper[4430]: I1203 14:18:19.982479 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92" err="rpc error: code = NotFound desc = could not find container \"5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92\": container with ID starting with 5c7c0dc33acf43d713f58b76101c6ef80dba9249d62b15bdc056e4ad04fa3e92 not found: ID does not exist" Dec 03 14:18:19.983168 master-0 kubenswrapper[4430]: E1203 14:18:19.983027 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b\": container with ID starting with 
8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b not found: ID does not exist" containerID="8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b" Dec 03 14:18:19.983168 master-0 kubenswrapper[4430]: I1203 14:18:19.983085 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b" err="rpc error: code = NotFound desc = could not find container \"8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b\": container with ID starting with 8f8bcb229dee281c6fc29c1db98c8691d69696c3625d8664573f82ecbc2aaf0b not found: ID does not exist" Dec 03 14:18:19.983676 master-0 kubenswrapper[4430]: E1203 14:18:19.983610 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768\": container with ID starting with d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768 not found: ID does not exist" containerID="d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768" Dec 03 14:18:19.983799 master-0 kubenswrapper[4430]: I1203 14:18:19.983672 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768" err="rpc error: code = NotFound desc = could not find container \"d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768\": container with ID starting with d2fc71099171002f51c2ed0100c0ad45bbfb4048bcd0e4680597f94d0eb84768 not found: ID does not exist" Dec 03 14:18:19.984147 master-0 kubenswrapper[4430]: E1203 14:18:19.984091 4430 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589\": container with ID starting with 
0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589 not found: ID does not exist" containerID="0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589" Dec 03 14:18:19.984147 master-0 kubenswrapper[4430]: I1203 14:18:19.984123 4430 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589" err="rpc error: code = NotFound desc = could not find container \"0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589\": container with ID starting with 0c1a715c6036734e270de9063bb8e75721f0c22823fb7728178b3a6a2d5b1589 not found: ID does not exist" Dec 03 14:18:26.033266 master-0 kubenswrapper[4430]: E1203 14:18:26.033053 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Dec 03 14:18:36.836745 master-0 kubenswrapper[4430]: E1203 14:18:36.836557 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="1.6s" Dec 03 14:18:41.194206 master-0 kubenswrapper[4430]: E1203 14:18:41.194091 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:18:31Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:18:31Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:18:31Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:18:31Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:18:46.293215 master-0 kubenswrapper[4430]: E1203 14:18:46.293039 4430 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.187dba429d14378c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:ebf07eb54db570834b7c9a90b6b07403,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Killing,Message:Stopping container etcd-metrics,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:17:04.265058188 +0000 UTC m=+524.887972254,LastTimestamp:2025-12-03 14:17:04.265058188 +0000 UTC m=+524.887972254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:18:47.263187 master-0 kubenswrapper[4430]: E1203 14:18:47.263132 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Dec 03 14:18:47.643743 master-0 kubenswrapper[4430]: I1203 14:18:47.643641 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="ce8a315a55455dcd2bd89fa32028c270083d4320da70822198130d5032317a0d" exitCode=0 Dec 03 14:18:47.643743 master-0 kubenswrapper[4430]: I1203 14:18:47.643714 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"ce8a315a55455dcd2bd89fa32028c270083d4320da70822198130d5032317a0d"} Dec 03 14:18:47.644811 master-0 kubenswrapper[4430]: I1203 14:18:47.644381 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:18:47.644811 master-0 kubenswrapper[4430]: I1203 14:18:47.644438 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:18:48.438259 master-0 kubenswrapper[4430]: E1203 14:18:48.438142 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 03 14:18:51.195628 master-0 kubenswrapper[4430]: E1203 14:18:51.195529 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:19:00.752450 master-0 kubenswrapper[4430]: I1203 14:19:00.752388 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/cluster-cloud-controller-manager/2.log" Dec 03 14:19:00.753024 master-0 kubenswrapper[4430]: I1203 14:19:00.752557 4430 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="38d3b50b66712e81beadff3b6029073280e3d8729325473fba1be3f14896eace" exitCode=1 Dec 03 14:19:00.753024 master-0 kubenswrapper[4430]: I1203 14:19:00.752595 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"38d3b50b66712e81beadff3b6029073280e3d8729325473fba1be3f14896eace"} Dec 03 14:19:00.753376 master-0 kubenswrapper[4430]: I1203 14:19:00.753346 4430 scope.go:117] "RemoveContainer" containerID="38d3b50b66712e81beadff3b6029073280e3d8729325473fba1be3f14896eace" Dec 03 14:19:01.196436 master-0 kubenswrapper[4430]: E1203 14:19:01.196384 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:19:01.640647 master-0 kubenswrapper[4430]: E1203 14:19:01.639681 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 03 14:19:01.765124 master-0 kubenswrapper[4430]: 
I1203 14:19:01.765052 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/cluster-cloud-controller-manager/2.log" Dec 03 14:19:01.765124 master-0 kubenswrapper[4430]: I1203 14:19:01.765139 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"8042d58d3159be0fcc39c083d9468beb138a750ed338d5aa4389b51a68544c23"} Dec 03 14:19:03.361675 master-0 kubenswrapper[4430]: I1203 14:19:03.361121 4430 patch_prober.go:28] interesting pod/catalogd-controller-manager-754cfd84-qf898 container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.33:8081/readyz\": dial tcp 10.128.0.33:8081: connect: connection refused" start-of-body= Dec 03 14:19:03.361675 master-0 kubenswrapper[4430]: I1203 14:19:03.361213 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.33:8081/readyz\": dial tcp 10.128.0.33:8081: connect: connection refused" Dec 03 14:19:03.780240 master-0 kubenswrapper[4430]: I1203 14:19:03.780185 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/1.log" Dec 03 14:19:03.780705 master-0 kubenswrapper[4430]: I1203 14:19:03.780662 4430 generic.go:334] "Generic (PLEG): container finished" podID="69b752ed-691c-4574-a01e-428d4bf85b75" containerID="defde12eea6161668211d878a862d3062187d5a0fd68e127f046adf8e6fb307b" exitCode=1 Dec 03 14:19:03.780791 master-0 kubenswrapper[4430]: I1203 
14:19:03.780740 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerDied","Data":"defde12eea6161668211d878a862d3062187d5a0fd68e127f046adf8e6fb307b"} Dec 03 14:19:03.781388 master-0 kubenswrapper[4430]: I1203 14:19:03.781362 4430 scope.go:117] "RemoveContainer" containerID="defde12eea6161668211d878a862d3062187d5a0fd68e127f046adf8e6fb307b" Dec 03 14:19:03.782569 master-0 kubenswrapper[4430]: I1203 14:19:03.782223 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/2.log" Dec 03 14:19:03.782569 master-0 kubenswrapper[4430]: I1203 14:19:03.782307 4430 generic.go:334] "Generic (PLEG): container finished" podID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" containerID="1e0f5c758fef44a1cf56d65a61567af61aa3dbbf05960724b8800db96778ab2e" exitCode=1 Dec 03 14:19:03.782569 master-0 kubenswrapper[4430]: I1203 14:19:03.782352 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerDied","Data":"1e0f5c758fef44a1cf56d65a61567af61aa3dbbf05960724b8800db96778ab2e"} Dec 03 14:19:03.783103 master-0 kubenswrapper[4430]: I1203 14:19:03.783064 4430 scope.go:117] "RemoveContainer" containerID="1e0f5c758fef44a1cf56d65a61567af61aa3dbbf05960724b8800db96778ab2e" Dec 03 14:19:04.794684 master-0 kubenswrapper[4430]: I1203 14:19:04.794615 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/1.log" Dec 03 14:19:04.795409 master-0 kubenswrapper[4430]: I1203 14:19:04.795380 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"dc551c9a615037c7c7b838a40e2ff9662582fd1a85bcf96c80ad2921e9fffc09"} Dec 03 14:19:04.795741 master-0 kubenswrapper[4430]: I1203 14:19:04.795721 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:19:04.797754 master-0 kubenswrapper[4430]: I1203 14:19:04.797740 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/2.log" Dec 03 14:19:04.797925 master-0 kubenswrapper[4430]: I1203 14:19:04.797910 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6"} Dec 03 14:19:05.812623 master-0 kubenswrapper[4430]: I1203 14:19:05.812530 4430 generic.go:334] "Generic (PLEG): container finished" podID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" containerID="868b4cdc8a4fe91b5ca34a18b1d879aa41665f52be1f78b8a23f6bad9d2f2106" exitCode=0 Dec 03 14:19:05.813706 master-0 kubenswrapper[4430]: I1203 14:19:05.813563 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerDied","Data":"868b4cdc8a4fe91b5ca34a18b1d879aa41665f52be1f78b8a23f6bad9d2f2106"} Dec 03 14:19:05.813980 master-0 kubenswrapper[4430]: I1203 14:19:05.813934 4430 scope.go:117] "RemoveContainer" containerID="868b4cdc8a4fe91b5ca34a18b1d879aa41665f52be1f78b8a23f6bad9d2f2106" Dec 03 14:19:06.838923 master-0 kubenswrapper[4430]: I1203 14:19:06.838851 4430 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"cd8eb198153632e60f2bbfce7d2c66f19dc3d5736772e8270e3481e312a5891e"} Dec 03 14:19:06.839690 master-0 kubenswrapper[4430]: I1203 14:19:06.839203 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:19:06.841259 master-0 kubenswrapper[4430]: I1203 14:19:06.841209 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:19:10.941347 master-0 kubenswrapper[4430]: I1203 14:19:10.941271 4430 status_manager.go:851] "Failed to get status for pod" podUID="56649bd4-ac30-4a70-8024-772294fede88" pod="openshift-monitoring/prometheus-k8s-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0)" Dec 03 14:19:11.197826 master-0 kubenswrapper[4430]: E1203 14:19:11.197661 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:19:11.881711 master-0 kubenswrapper[4430]: I1203 14:19:11.881596 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-5f78c89466-bshxw_82bd0ae5-b35d-47c8-b693-b27a9a56476d/manager/2.log" Dec 03 14:19:11.881711 master-0 kubenswrapper[4430]: I1203 14:19:11.881669 4430 generic.go:334] "Generic (PLEG): container finished" podID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" containerID="c6a1bd456199320e9417bea0616af7659530d807a4c92a44ce75df1bc574d0ea" exitCode=1 Dec 03 14:19:11.881711 master-0 kubenswrapper[4430]: I1203 14:19:11.881709 4430 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerDied","Data":"c6a1bd456199320e9417bea0616af7659530d807a4c92a44ce75df1bc574d0ea"} Dec 03 14:19:11.883107 master-0 kubenswrapper[4430]: I1203 14:19:11.882386 4430 scope.go:117] "RemoveContainer" containerID="c6a1bd456199320e9417bea0616af7659530d807a4c92a44ce75df1bc574d0ea" Dec 03 14:19:12.892148 master-0 kubenswrapper[4430]: I1203 14:19:12.892032 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-5f78c89466-bshxw_82bd0ae5-b35d-47c8-b693-b27a9a56476d/manager/2.log" Dec 03 14:19:12.892148 master-0 kubenswrapper[4430]: I1203 14:19:12.892164 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"b6bbf72a84092f50bd40fe6e46d6311742d89a9cef83ffde53267283ea90b6f9"} Dec 03 14:19:12.892744 master-0 kubenswrapper[4430]: I1203 14:19:12.892551 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:19:12.894709 master-0 kubenswrapper[4430]: I1203 14:19:12.894680 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/config-sync-controllers/2.log" Dec 03 14:19:12.895192 master-0 kubenswrapper[4430]: I1203 14:19:12.895161 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/cluster-cloud-controller-manager/2.log" 
Dec 03 14:19:12.895290 master-0 kubenswrapper[4430]: I1203 14:19:12.895207 4430 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="205a91247174fbcd49caa70233b8561e1b597ab7d8471618046e45f8d26ee607" exitCode=1 Dec 03 14:19:12.895290 master-0 kubenswrapper[4430]: I1203 14:19:12.895234 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"205a91247174fbcd49caa70233b8561e1b597ab7d8471618046e45f8d26ee607"} Dec 03 14:19:12.895631 master-0 kubenswrapper[4430]: I1203 14:19:12.895596 4430 scope.go:117] "RemoveContainer" containerID="205a91247174fbcd49caa70233b8561e1b597ab7d8471618046e45f8d26ee607" Dec 03 14:19:13.363960 master-0 kubenswrapper[4430]: I1203 14:19:13.363888 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:19:13.910450 master-0 kubenswrapper[4430]: I1203 14:19:13.910341 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/config-sync-controllers/2.log" Dec 03 14:19:13.911486 master-0 kubenswrapper[4430]: I1203 14:19:13.911078 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/cluster-cloud-controller-manager/2.log" Dec 03 14:19:13.912609 master-0 kubenswrapper[4430]: I1203 14:19:13.912543 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" 
event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"d523e57877f21ef3a75f49049c5491db001e1951e8c205f8a24f1c3ad8c18bfc"} Dec 03 14:19:18.042245 master-0 kubenswrapper[4430]: E1203 14:19:18.042096 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Dec 03 14:19:20.296921 master-0 kubenswrapper[4430]: E1203 14:19:20.296691 4430 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.187dba429d164c9c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:ebf07eb54db570834b7c9a90b6b07403,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:17:04.265194652 +0000 UTC m=+524.888108728,LastTimestamp:2025-12-03 14:17:04.265194652 +0000 UTC m=+524.888108728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:19:21.198594 master-0 kubenswrapper[4430]: E1203 14:19:21.198370 4430 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 03 14:19:21.198594 master-0 kubenswrapper[4430]: E1203 14:19:21.198536 4430 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 
14:19:21.648142 master-0 kubenswrapper[4430]: E1203 14:19:21.648054 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Dec 03 14:19:22.995178 master-0 kubenswrapper[4430]: I1203 14:19:22.995091 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="9c4fd5314c75c2c43e3aa735f4fb7297bc185515c12662b1ae624562343b1716" exitCode=0 Dec 03 14:19:22.996660 master-0 kubenswrapper[4430]: I1203 14:19:22.995166 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"9c4fd5314c75c2c43e3aa735f4fb7297bc185515c12662b1ae624562343b1716"} Dec 03 14:19:22.996660 master-0 kubenswrapper[4430]: I1203 14:19:22.995812 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:19:22.996660 master-0 kubenswrapper[4430]: I1203 14:19:22.995833 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b" Dec 03 14:19:25.120236 master-0 kubenswrapper[4430]: I1203 14:19:25.120174 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:19:29.049709 master-0 kubenswrapper[4430]: I1203 14:19:29.048679 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-66f4cc99d4-x278n_ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/control-plane-machine-set-operator/1.log" Dec 03 14:19:29.049709 master-0 kubenswrapper[4430]: I1203 14:19:29.048756 4430 generic.go:334] "Generic (PLEG): container finished" podID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" 
containerID="664c3ff2947feb1a70318c5ff0027e09769155ce2375556704dbb4dae528edde" exitCode=1 Dec 03 14:19:29.049709 master-0 kubenswrapper[4430]: I1203 14:19:29.048800 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerDied","Data":"664c3ff2947feb1a70318c5ff0027e09769155ce2375556704dbb4dae528edde"} Dec 03 14:19:29.049709 master-0 kubenswrapper[4430]: I1203 14:19:29.049531 4430 scope.go:117] "RemoveContainer" containerID="664c3ff2947feb1a70318c5ff0027e09769155ce2375556704dbb4dae528edde" Dec 03 14:19:30.055904 master-0 kubenswrapper[4430]: I1203 14:19:30.055856 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-66f4cc99d4-x278n_ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/control-plane-machine-set-operator/1.log" Dec 03 14:19:30.056520 master-0 kubenswrapper[4430]: I1203 14:19:30.055949 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"302044d7c6d769d45230c8541ec0a0346f7987ab87dc02466826c5647a15464c"} Dec 03 14:19:31.063777 master-0 kubenswrapper[4430]: I1203 14:19:31.063657 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/3.log" Dec 03 14:19:31.063777 master-0 kubenswrapper[4430]: I1203 14:19:31.063712 4430 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="f4124a45c982dd6ad8408966f456c2e401e595751b2120febf21b52aa6853950" exitCode=1 Dec 03 14:19:31.063777 master-0 kubenswrapper[4430]: I1203 14:19:31.063747 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"f4124a45c982dd6ad8408966f456c2e401e595751b2120febf21b52aa6853950"} Dec 03 14:19:31.064376 master-0 kubenswrapper[4430]: I1203 14:19:31.064278 4430 scope.go:117] "RemoveContainer" containerID="f4124a45c982dd6ad8408966f456c2e401e595751b2120febf21b52aa6853950" Dec 03 14:19:32.070804 master-0 kubenswrapper[4430]: I1203 14:19:32.070752 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/3.log" Dec 03 14:19:32.071491 master-0 kubenswrapper[4430]: I1203 14:19:32.070839 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1"} Dec 03 14:19:34.094312 master-0 kubenswrapper[4430]: I1203 14:19:34.094211 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/3.log" Dec 03 14:19:34.095574 master-0 kubenswrapper[4430]: I1203 14:19:34.094923 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/2.log" Dec 03 14:19:34.095574 master-0 kubenswrapper[4430]: I1203 14:19:34.094986 4430 generic.go:334] "Generic (PLEG): container finished" podID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" containerID="82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6" exitCode=1 Dec 03 14:19:34.095574 master-0 kubenswrapper[4430]: I1203 14:19:34.095054 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerDied","Data":"82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6"} Dec 03 14:19:34.095574 master-0 kubenswrapper[4430]: I1203 14:19:34.095113 4430 scope.go:117] "RemoveContainer" containerID="1e0f5c758fef44a1cf56d65a61567af61aa3dbbf05960724b8800db96778ab2e" Dec 03 14:19:34.096263 master-0 kubenswrapper[4430]: I1203 14:19:34.096212 4430 scope.go:117] "RemoveContainer" containerID="82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6" Dec 03 14:19:34.096892 master-0 kubenswrapper[4430]: E1203 14:19:34.096616 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-86897dd478-qqwh7_openshift-cluster-storage-operator(63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:19:34.097871 master-0 kubenswrapper[4430]: I1203 14:19:34.097849 4430 generic.go:334] "Generic (PLEG): container finished" podID="6935a3f8-723e-46e6-8498-483f34bf0825" containerID="08acd077553f72d39a3338430ca8c622c61126e0810d50f76c2ab4bda2d6067f" exitCode=0 Dec 03 14:19:34.097956 master-0 kubenswrapper[4430]: I1203 14:19:34.097876 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerDied","Data":"08acd077553f72d39a3338430ca8c622c61126e0810d50f76c2ab4bda2d6067f"} Dec 03 14:19:34.098683 master-0 kubenswrapper[4430]: I1203 14:19:34.098660 4430 scope.go:117] "RemoveContainer" containerID="08acd077553f72d39a3338430ca8c622c61126e0810d50f76c2ab4bda2d6067f" Dec 03 14:19:35.050683 master-0 
kubenswrapper[4430]: E1203 14:19:35.050591 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Dec 03 14:19:35.108633 master-0 kubenswrapper[4430]: I1203 14:19:35.108560 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/3.log"
Dec 03 14:19:35.112116 master-0 kubenswrapper[4430]: I1203 14:19:35.112067 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/1.log"
Dec 03 14:19:35.112723 master-0 kubenswrapper[4430]: I1203 14:19:35.112696 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler/2.log"
Dec 03 14:19:35.113150 master-0 kubenswrapper[4430]: I1203 14:19:35.113111 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12" exitCode=1
Dec 03 14:19:35.113226 master-0 kubenswrapper[4430]: I1203 14:19:35.113196 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12"}
Dec 03 14:19:35.114123 master-0 kubenswrapper[4430]: I1203 14:19:35.114084 4430 scope.go:117] "RemoveContainer" containerID="83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12"
Dec 03 14:19:35.116543 master-0 kubenswrapper[4430]: I1203 14:19:35.116494 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"7c9574a6b0f070d680facdbd7dd3bbc55a0dc6621922d174cc33aa4cf83d151f"}
Dec 03 14:19:35.519568 master-0 kubenswrapper[4430]: I1203 14:19:35.519475 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:19:36.126757 master-0 kubenswrapper[4430]: I1203 14:19:36.126719 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/1.log"
Dec 03 14:19:36.128893 master-0 kubenswrapper[4430]: I1203 14:19:36.128836 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler/2.log"
Dec 03 14:19:36.129381 master-0 kubenswrapper[4430]: I1203 14:19:36.129354 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4"}
Dec 03 14:19:36.129830 master-0 kubenswrapper[4430]: I1203 14:19:36.129780 4430 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" containerID="cri-o://83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12"
Dec 03 14:19:36.129908 master-0 kubenswrapper[4430]: I1203 14:19:36.129854 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:19:37.323387 master-0 kubenswrapper[4430]: I1203 14:19:37.323312 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:19:44.388904 master-0 kubenswrapper[4430]: I1203 14:19:44.388845 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/3.log"
Dec 03 14:19:44.390391 master-0 kubenswrapper[4430]: I1203 14:19:44.389463 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/2.log"
Dec 03 14:19:44.390391 master-0 kubenswrapper[4430]: I1203 14:19:44.389903 4430 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="068ddb4d161d39aa30c2725ce031626c21271c908564c6ab6d59dc24ea4c3c49" exitCode=255
Dec 03 14:19:44.390391 master-0 kubenswrapper[4430]: I1203 14:19:44.389985 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"068ddb4d161d39aa30c2725ce031626c21271c908564c6ab6d59dc24ea4c3c49"}
Dec 03 14:19:44.390391 master-0 kubenswrapper[4430]: I1203 14:19:44.390105 4430 scope.go:117] "RemoveContainer" containerID="91c459125c51bbf21f0e3ee77e69ce6d33befa01877a485335f7af3fba87e31e"
Dec 03 14:19:44.391286 master-0 kubenswrapper[4430]: I1203 14:19:44.391254 4430 scope.go:117] "RemoveContainer" containerID="068ddb4d161d39aa30c2725ce031626c21271c908564c6ab6d59dc24ea4c3c49"
Dec 03 14:19:45.401460 master-0 kubenswrapper[4430]: I1203 14:19:45.401363 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/3.log"
Dec 03 14:19:45.402644 master-0 kubenswrapper[4430]: I1203 14:19:45.401909 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"f13652d4628e63bd07a9fa5dbb77c63632f6698856c14038338f8bbd246f113a"}
Dec 03 14:19:47.238667 master-0 kubenswrapper[4430]: I1203 14:19:47.238580 4430 patch_prober.go:28] interesting pod/controller-manager-7d7ddcf759-pvkrm container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.34:8443/healthz\": dial tcp 10.128.0.34:8443: connect: connection refused" start-of-body=
Dec 03 14:19:47.239171 master-0 kubenswrapper[4430]: I1203 14:19:47.238689 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.34:8443/healthz\": dial tcp 10.128.0.34:8443: connect: connection refused"
Dec 03 14:19:47.239171 master-0 kubenswrapper[4430]: I1203 14:19:47.238924 4430 patch_prober.go:28] interesting pod/controller-manager-7d7ddcf759-pvkrm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.34:8443/healthz\": dial tcp 10.128.0.34:8443: connect: connection refused" start-of-body=
Dec 03 14:19:47.239171 master-0 kubenswrapper[4430]: I1203 14:19:47.239041 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.34:8443/healthz\": dial tcp 10.128.0.34:8443: connect: connection refused"
Dec 03 14:19:47.422341 master-0 kubenswrapper[4430]: I1203 14:19:47.422161 4430 generic.go:334] "Generic (PLEG): container finished" podID="e89bc996-818b-46b9-ad39-a12457acd4bb" containerID="778a7e26cb5ad94e90459f8aaf6d567b6415a8e54be0dc590a3189cec8d79a9c" exitCode=0
Dec 03 14:19:47.422652 master-0 kubenswrapper[4430]: I1203 14:19:47.422295 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerDied","Data":"778a7e26cb5ad94e90459f8aaf6d567b6415a8e54be0dc590a3189cec8d79a9c"}
Dec 03 14:19:47.423411 master-0 kubenswrapper[4430]: I1203 14:19:47.423374 4430 scope.go:117] "RemoveContainer" containerID="778a7e26cb5ad94e90459f8aaf6d567b6415a8e54be0dc590a3189cec8d79a9c"
Dec 03 14:19:47.584853 master-0 kubenswrapper[4430]: I1203 14:19:47.584777 4430 scope.go:117] "RemoveContainer" containerID="82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6"
Dec 03 14:19:48.433069 master-0 kubenswrapper[4430]: I1203 14:19:48.432915 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerStarted","Data":"6ea3061d4c54dfaae6ea2f65c8136df957511b3df2abe8d033135b6acbdfef4e"}
Dec 03 14:19:48.434098 master-0 kubenswrapper[4430]: I1203 14:19:48.433444 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:19:48.435590 master-0 kubenswrapper[4430]: I1203 14:19:48.435528 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/3.log"
Dec 03 14:19:48.435838 master-0 kubenswrapper[4430]: I1203 14:19:48.435615 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8"}
Dec 03 14:19:48.439761 master-0 kubenswrapper[4430]: I1203 14:19:48.439689 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:19:49.458732 master-0 kubenswrapper[4430]: I1203 14:19:49.458564 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="bf6199bcbec88449c79d8c0fecef851ed148f249eec05a4f4ea3ebc9e8451088" exitCode=0
Dec 03 14:19:49.458732 master-0 kubenswrapper[4430]: I1203 14:19:49.458662 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"bf6199bcbec88449c79d8c0fecef851ed148f249eec05a4f4ea3ebc9e8451088"}
Dec 03 14:19:49.460634 master-0 kubenswrapper[4430]: I1203 14:19:49.460282 4430 scope.go:117] "RemoveContainer" containerID="bf6199bcbec88449c79d8c0fecef851ed148f249eec05a4f4ea3ebc9e8451088"
Dec 03 14:19:50.473575 master-0 kubenswrapper[4430]: I1203 14:19:50.472748 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0"}
Dec 03 14:19:50.927676 master-0 kubenswrapper[4430]: I1203 14:19:50.927476 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:19:50.927676 master-0 kubenswrapper[4430]: I1203 14:19:50.927586 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:19:52.051516 master-0 kubenswrapper[4430]: E1203 14:19:52.051088 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Dec 03 14:19:53.927974 master-0 kubenswrapper[4430]: I1203 14:19:53.927883 4430 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 14:19:53.927974 master-0 kubenswrapper[4430]: I1203 14:19:53.927981 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 03 14:19:54.300947 master-0 kubenswrapper[4430]: E1203 14:19:54.300735 4430 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{openshift-kube-scheduler-master-0.187dba4e735a46d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-master-0,UID:fd2fa610bb2a39c39fcdd00db03a511a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:17:55.1046141 +0000 UTC m=+575.727528176,LastTimestamp:2025-12-03 14:17:55.1046141 +0000 UTC m=+575.727528176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 14:19:56.999162 master-0 kubenswrapper[4430]: E1203 14:19:56.999068 4430 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Dec 03 14:19:57.532618 master-0 kubenswrapper[4430]: I1203 14:19:57.532412 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"baccffb5cc31608dde669588139e678e7dc63fed31e5fb315c287a4095296e77"}
Dec 03 14:19:57.532618 master-0 kubenswrapper[4430]: I1203 14:19:57.532499 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"6b0b2af7a479ff95c38af82129209ba561c7c05d07f88baeeb860f4dce265009"}
Dec 03 14:19:58.543998 master-0 kubenswrapper[4430]: I1203 14:19:58.543926 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"dc5f4e8ba796d8365c4ac4c987abb1a3246149be979ebd7c40b447b9dcbbb050"}
Dec 03 14:19:58.543998 master-0 kubenswrapper[4430]: I1203 14:19:58.543991 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"b583c0349f59fc52b4ab436b3e5a0a2ea570336dd66ccb6a9e86fd60a78b1112"}
Dec 03 14:19:58.543998 master-0 kubenswrapper[4430]: I1203 14:19:58.544004 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"14e2c5d81253238f8a04491442a8e356f260884f7773fa7385d995eddd71e620"}
Dec 03 14:19:58.544816 master-0 kubenswrapper[4430]: I1203 14:19:58.544282 4430 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b"
Dec 03 14:19:58.544816 master-0 kubenswrapper[4430]: I1203 14:19:58.544299 4430 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="3822297c-8245-42f8-ad55-a4be310ba17b"
Dec 03 14:20:02.625915 master-0 kubenswrapper[4430]: I1203 14:20:02.625820 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:02.625915 master-0 kubenswrapper[4430]: I1203 14:20:02.625917 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:03.927683 master-0 kubenswrapper[4430]: I1203 14:20:03.927570 4430 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 14:20:03.927683 master-0 kubenswrapper[4430]: I1203 14:20:03.927683 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 14:20:08.902951 master-0 kubenswrapper[4430]: I1203 14:20:08.902881 4430 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:08.996706 master-0 kubenswrapper[4430]: I1203 14:20:08.995600 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Dec 03 14:20:09.010199 master-0 kubenswrapper[4430]: I1203 14:20:09.010004 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Dec 03 14:20:09.048398 master-0 kubenswrapper[4430]: I1203 14:20:09.040594 4430 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59fc685495-qcxmz"]
Dec 03 14:20:09.048398 master-0 kubenswrapper[4430]: I1203 14:20:09.045055 4430 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-59fc685495-qcxmz"]
Dec 03 14:20:09.053115 master-0 kubenswrapper[4430]: E1203 14:20:09.053056 4430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Dec 03 14:20:09.595380 master-0 kubenswrapper[4430]: I1203 14:20:09.595305 4430 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" path="/var/lib/kubelet/pods/1566ad2b-b965-4ba3-8a8b-f93b39e732c8/volumes"
Dec 03 14:20:12.648138 master-0 kubenswrapper[4430]: I1203 14:20:12.648045 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:12.665547 master-0 kubenswrapper[4430]: I1203 14:20:12.665484 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Dec 03 14:20:13.688386 master-0 kubenswrapper[4430]: I1203 14:20:13.688331 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:13.689443 master-0 kubenswrapper[4430]: E1203 14:20:13.689392 4430 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Dec 03 14:20:13.928330 master-0 kubenswrapper[4430]: I1203 14:20:13.928235 4430 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 14:20:13.928705 master-0 kubenswrapper[4430]: I1203 14:20:13.928333 4430 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 03 14:20:13.928705 master-0 kubenswrapper[4430]: I1203 14:20:13.928402 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:20:13.929287 master-0 kubenswrapper[4430]: I1203 14:20:13.929236 4430 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Dec 03 14:20:13.929401 master-0 kubenswrapper[4430]: I1203 14:20:13.929351 4430 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" containerID="cri-o://3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0" gracePeriod=30
Dec 03 14:20:14.684818 master-0 kubenswrapper[4430]: I1203 14:20:14.684772 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/cluster-policy-controller/1.log"
Dec 03 14:20:14.686483 master-0 kubenswrapper[4430]: I1203 14:20:14.686449 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0"}
Dec 03 14:20:14.686566 master-0 kubenswrapper[4430]: I1203 14:20:14.686460 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0" exitCode=255
Dec 03 14:20:14.686566 master-0 kubenswrapper[4430]: I1203 14:20:14.686522 4430 scope.go:117] "RemoveContainer" containerID="bf6199bcbec88449c79d8c0fecef851ed148f249eec05a4f4ea3ebc9e8451088"
Dec 03 14:20:16.702119 master-0 kubenswrapper[4430]: I1203 14:20:16.701948 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/cluster-policy-controller/1.log"
Dec 03 14:20:16.703714 master-0 kubenswrapper[4430]: I1203 14:20:16.703628 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc"}
Dec 03 14:20:16.736057 master-0 kubenswrapper[4430]: I1203 14:20:16.735968 4430 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=4.735942373 podStartE2EDuration="4.735942373s" podCreationTimestamp="2025-12-03 14:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:20:13.730761556 +0000 UTC m=+714.353675632" watchObservedRunningTime="2025-12-03 14:20:16.735942373 +0000 UTC m=+717.358856449"
Dec 03 14:20:18.833857 master-0 kubenswrapper[4430]: I1203 14:20:18.833804 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/4.log"
Dec 03 14:20:18.835190 master-0 kubenswrapper[4430]: I1203 14:20:18.835154 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/3.log"
Dec 03 14:20:18.835275 master-0 kubenswrapper[4430]: I1203 14:20:18.835241 4430 generic.go:334] "Generic (PLEG): container finished" podID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" containerID="adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8" exitCode=1
Dec 03 14:20:18.835336 master-0 kubenswrapper[4430]: I1203 14:20:18.835313 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerDied","Data":"adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8"}
Dec 03 14:20:18.835458 master-0 kubenswrapper[4430]: I1203 14:20:18.835411 4430 scope.go:117] "RemoveContainer" containerID="82d36b18de127144d61c1f6948e343dfcdc0f5f722b60d2ba29b67117385d5b6"
Dec 03 14:20:18.835783 master-0 kubenswrapper[4430]: I1203 14:20:18.835762 4430 scope.go:117] "RemoveContainer" containerID="adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8"
Dec 03 14:20:18.836053 master-0 kubenswrapper[4430]: E1203 14:20:18.836021 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-86897dd478-qqwh7_openshift-cluster-storage-operator(63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:20:19.844640 master-0 kubenswrapper[4430]: I1203 14:20:19.844582 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/4.log"
Dec 03 14:20:20.927855 master-0 kubenswrapper[4430]: I1203 14:20:20.927785 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:20:20.927855 master-0 kubenswrapper[4430]: I1203 14:20:20.927843 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:20:20.933398 master-0 kubenswrapper[4430]: I1203 14:20:20.933357 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:20:28.401938 master-0 kubenswrapper[4430]: I1203 14:20:28.401734 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: E1203 14:20:28.402253 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f539ea7-39a7-422f-82d9-7603eede84cf" containerName="installer"
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: I1203 14:20:28.402286 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f539ea7-39a7-422f-82d9-7603eede84cf" containerName="installer"
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: E1203 14:20:28.402307 4430 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" containerName="console"
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: I1203 14:20:28.402314 4430 state_mem.go:107] "Deleted CPUSet assignment" podUID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" containerName="console"
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: I1203 14:20:28.402515 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f539ea7-39a7-422f-82d9-7603eede84cf" containerName="installer"
Dec 03 14:20:28.402797 master-0 kubenswrapper[4430]: I1203 14:20:28.402531 4430 memory_manager.go:354] "RemoveStaleState removing state" podUID="1566ad2b-b965-4ba3-8a8b-f93b39e732c8" containerName="console"
Dec 03 14:20:28.403328 master-0 kubenswrapper[4430]: I1203 14:20:28.403297 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.406505 master-0 kubenswrapper[4430]: I1203 14:20:28.405951 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7ctx2"
Dec 03 14:20:28.406505 master-0 kubenswrapper[4430]: I1203 14:20:28.406402 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Dec 03 14:20:28.450228 master-0 kubenswrapper[4430]: I1203 14:20:28.450125 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.450545 master-0 kubenswrapper[4430]: I1203 14:20:28.450251 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.450545 master-0 kubenswrapper[4430]: I1203 14:20:28.450401 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.493706 master-0 kubenswrapper[4430]: I1203 14:20:28.493649 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:20:28.498764 master-0 kubenswrapper[4430]: I1203 14:20:28.498712 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"]
Dec 03 14:20:28.552273 master-0 kubenswrapper[4430]: I1203 14:20:28.552216 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.552627 master-0 kubenswrapper[4430]: I1203 14:20:28.552604 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.552808 master-0 kubenswrapper[4430]: I1203 14:20:28.552691 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.552968 master-0 kubenswrapper[4430]: I1203 14:20:28.552947 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:28.553146 master-0 kubenswrapper[4430]: I1203 14:20:28.553033 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:20:30.933469 master-0 kubenswrapper[4430]: I1203 14:20:30.933402 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:20:31.970385 master-0 kubenswrapper[4430]: I1203 14:20:31.970340 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/4.log"
Dec 03 14:20:31.971272 master-0 kubenswrapper[4430]: I1203 14:20:31.971239 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/3.log"
Dec 03 14:20:31.971334 master-0 kubenswrapper[4430]: I1203 14:20:31.971290 4430 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1" exitCode=1
Dec 03 14:20:31.971387 master-0 kubenswrapper[4430]: I1203 14:20:31.971329 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1"}
Dec 03 14:20:31.971387 master-0 kubenswrapper[4430]: I1203 14:20:31.971380 4430 scope.go:117] "RemoveContainer" containerID="f4124a45c982dd6ad8408966f456c2e401e595751b2120febf21b52aa6853950"
Dec 03 14:20:31.972251 master-0 kubenswrapper[4430]: I1203 14:20:31.972208 4430 scope.go:117] "RemoveContainer" containerID="7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1"
Dec 03 14:20:31.973010 master-0 kubenswrapper[4430]: E1203 14:20:31.972627 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5fdc576499-j2n8j_openshift-machine-api(690d1f81-7b1f-4fd0-9b6e-154c9687c744)\"" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:20:33.584978 master-0 kubenswrapper[4430]: I1203 14:20:33.584903 4430 scope.go:117] "RemoveContainer" containerID="adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8"
Dec 03 14:20:33.585599 master-0 kubenswrapper[4430]: E1203 14:20:33.585186 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-86897dd478-qqwh7_openshift-cluster-storage-operator(63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:20:34.011715 master-0 kubenswrapper[4430]: I1203 14:20:34.011654 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/4.log"
Dec 03 14:20:37.517261 master-0 kubenswrapper[4430]: I1203 14:20:37.517211 4430 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Dec 03 14:20:37.519661 master-0 kubenswrapper[4430]: I1203 14:20:37.519642 4430 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.522289 master-0 kubenswrapper[4430]: I1203 14:20:37.522259 4430 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz"
Dec 03 14:20:37.524044 master-0 kubenswrapper[4430]: I1203 14:20:37.524027 4430 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 03 14:20:37.534135 master-0 kubenswrapper[4430]: I1203 14:20:37.534079 4430 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Dec 03 14:20:37.550369 master-0 kubenswrapper[4430]: I1203 14:20:37.546983 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.550369 master-0 kubenswrapper[4430]: I1203 14:20:37.547188 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.550369 master-0 kubenswrapper[4430]: I1203 14:20:37.547237 4430 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.648752 master-0 kubenswrapper[4430]: I1203 14:20:37.648661 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.649124 master-0 kubenswrapper[4430]: I1203 14:20:37.649089 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.649172 master-0 kubenswrapper[4430]: I1203 14:20:37.649137 4430 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.649375 master-0 kubenswrapper[4430]: I1203 14:20:37.649331 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:37.649450 master-0 kubenswrapper[4430]: I1203 14:20:37.649395 4430 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:20:43.585799 master-0 kubenswrapper[4430]: I1203 14:20:43.585602 4430 scope.go:117] "RemoveContainer"
containerID="7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1" Dec 03 14:20:46.584601 master-0 kubenswrapper[4430]: I1203 14:20:46.584463 4430 scope.go:117] "RemoveContainer" containerID="adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8" Dec 03 14:20:48.310267 master-0 kubenswrapper[4430]: I1203 14:20:48.310204 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/4.log" Dec 03 14:20:48.310906 master-0 kubenswrapper[4430]: I1203 14:20:48.310665 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"0c52434e813a20a1a4ecd0980a8da617dc528ac101149a794f0f4e66aaba1148"} Dec 03 14:20:48.313095 master-0 kubenswrapper[4430]: I1203 14:20:48.313060 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/4.log" Dec 03 14:20:48.313168 master-0 kubenswrapper[4430]: I1203 14:20:48.313115 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"e2fa399cca28278caf8b044e9c78793296cf9c5e629fca1475d458bec63e78db"} Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.375143 4430 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="6781555008cceeeafa864fdc3e6eecbe93001626b5e30a88f394829d975fd631" exitCode=0 Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.375196 4430 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" 
containerID="b3ca94a978df3ec904f71631e19f006c6a70b016095b3c0dca3c4a7f1e79fe33" exitCode=0 Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.375266 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"6781555008cceeeafa864fdc3e6eecbe93001626b5e30a88f394829d975fd631"} Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.375309 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"b3ca94a978df3ec904f71631e19f006c6a70b016095b3c0dca3c4a7f1e79fe33"} Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.375333 4430 scope.go:117] "RemoveContainer" containerID="6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.376149 4430 scope.go:117] "RemoveContainer" containerID="6781555008cceeeafa864fdc3e6eecbe93001626b5e30a88f394829d975fd631" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.376187 4430 scope.go:117] "RemoveContainer" containerID="b3ca94a978df3ec904f71631e19f006c6a70b016095b3c0dca3c4a7f1e79fe33" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.389922 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/2.log" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.393046 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/1.log" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.393807 4430 generic.go:334] "Generic (PLEG): container finished" 
podID="69b752ed-691c-4574-a01e-428d4bf85b75" containerID="dc551c9a615037c7c7b838a40e2ff9662582fd1a85bcf96c80ad2921e9fffc09" exitCode=1 Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.393914 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerDied","Data":"dc551c9a615037c7c7b838a40e2ff9662582fd1a85bcf96c80ad2921e9fffc09"} Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: I1203 14:20:52.394752 4430 scope.go:117] "RemoveContainer" containerID="dc551c9a615037c7c7b838a40e2ff9662582fd1a85bcf96c80ad2921e9fffc09" Dec 03 14:20:52.396094 master-0 kubenswrapper[4430]: E1203 14:20:52.394980 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-754cfd84-qf898_openshift-catalogd(69b752ed-691c-4574-a01e-428d4bf85b75)\"" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:20:52.404812 master-0 kubenswrapper[4430]: I1203 14:20:52.403056 4430 generic.go:334] "Generic (PLEG): container finished" podID="7663a25e-236d-4b1d-83ce-733ab146dee3" containerID="2fe126435264098876b7af1b0b1c3cc93f24899e226ee4a6f79667cf9a3e7b3d" exitCode=0 Dec 03 14:20:52.404812 master-0 kubenswrapper[4430]: I1203 14:20:52.403166 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerDied","Data":"2fe126435264098876b7af1b0b1c3cc93f24899e226ee4a6f79667cf9a3e7b3d"} Dec 03 14:20:52.404812 master-0 kubenswrapper[4430]: I1203 14:20:52.404033 4430 scope.go:117] "RemoveContainer" containerID="2fe126435264098876b7af1b0b1c3cc93f24899e226ee4a6f79667cf9a3e7b3d" Dec 03 14:20:52.412193 master-0 
kubenswrapper[4430]: I1203 14:20:52.411166 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-metrics/0.log" Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.420026 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="14e2c5d81253238f8a04491442a8e356f260884f7773fa7385d995eddd71e620" exitCode=2 Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.420066 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="6b0b2af7a479ff95c38af82129209ba561c7c05d07f88baeeb860f4dce265009" exitCode=0 Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.420141 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"14e2c5d81253238f8a04491442a8e356f260884f7773fa7385d995eddd71e620"} Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.420184 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"6b0b2af7a479ff95c38af82129209ba561c7c05d07f88baeeb860f4dce265009"} Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.421877 4430 scope.go:117] "RemoveContainer" containerID="6b0b2af7a479ff95c38af82129209ba561c7c05d07f88baeeb860f4dce265009" Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.422005 4430 scope.go:117] "RemoveContainer" containerID="14e2c5d81253238f8a04491442a8e356f260884f7773fa7385d995eddd71e620" Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.431132 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-pvrfs_eecc43f5-708f-4395-98cc-696b243d6321/machine-config-server/3.log" Dec 03 
14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.431192 4430 generic.go:334] "Generic (PLEG): container finished" podID="eecc43f5-708f-4395-98cc-696b243d6321" containerID="5fd119858007e4b5a1b4112671b0b0fdba132ce8265b36ea78a8e9fea5aa487a" exitCode=2 Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.431328 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerDied","Data":"5fd119858007e4b5a1b4112671b0b0fdba132ce8265b36ea78a8e9fea5aa487a"} Dec 03 14:20:52.432116 master-0 kubenswrapper[4430]: I1203 14:20:52.432039 4430 scope.go:117] "RemoveContainer" containerID="5fd119858007e4b5a1b4112671b0b0fdba132ce8265b36ea78a8e9fea5aa487a" Dec 03 14:20:52.449611 master-0 kubenswrapper[4430]: I1203 14:20:52.448058 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/cluster-policy-controller/1.log" Dec 03 14:20:52.470403 master-0 kubenswrapper[4430]: I1203 14:20:52.468114 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/0.log" Dec 03 14:20:52.482208 master-0 kubenswrapper[4430]: I1203 14:20:52.482154 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb" exitCode=0 Dec 03 14:20:52.482382 master-0 kubenswrapper[4430]: I1203 14:20:52.482259 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb"} Dec 03 14:20:52.483153 master-0 
kubenswrapper[4430]: I1203 14:20:52.483127 4430 scope.go:117] "RemoveContainer" containerID="1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb" Dec 03 14:20:52.483213 master-0 kubenswrapper[4430]: I1203 14:20:52.483190 4430 scope.go:117] "RemoveContainer" containerID="feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769" Dec 03 14:20:52.489347 master-0 kubenswrapper[4430]: E1203 14:20:52.489271 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:20:52.489448 master-0 kubenswrapper[4430]: E1203 14:20:52.489366 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:20:52.489448 master-0 kubenswrapper[4430]: I1203 14:20:52.489410 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae"} Dec 03 
14:20:52.489538 master-0 kubenswrapper[4430]: I1203 14:20:52.489378 4430 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae" exitCode=0 Dec 03 14:20:52.491161 master-0 kubenswrapper[4430]: E1203 14:20:52.491093 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:20:52.491331 master-0 kubenswrapper[4430]: E1203 14:20:52.491278 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:20:52.492156 master-0 kubenswrapper[4430]: I1203 14:20:52.492118 4430 scope.go:117] "RemoveContainer" containerID="d60ff94b0488538faf66e4166cf33ce56c841715d1cdb0df2e7ec059f70cc2ae" Dec 03 14:20:52.495023 master-0 kubenswrapper[4430]: E1203 14:20:52.494913 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Dec 03 14:20:52.495113 master-0 kubenswrapper[4430]: E1203 14:20:52.495006 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi"] Dec 03 14:20:52.495113 master-0 kubenswrapper[4430]: E1203 14:20:52.495081 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" containerName="prometheus" Dec 03 14:20:52.495194 master-0 kubenswrapper[4430]: E1203 14:20:52.495078 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1 is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" containerName="prometheus" Dec 03 14:20:52.499341 
master-0 kubenswrapper[4430]: I1203 14:20:52.499281 4430 generic.go:334] "Generic (PLEG): container finished" podID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" containerID="030f7dd0dec22bf96156b22c215fe1cbc7bc1825867377f5fd37e9f774beb6ec" exitCode=0 Dec 03 14:20:52.499421 master-0 kubenswrapper[4430]: I1203 14:20:52.499382 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerDied","Data":"030f7dd0dec22bf96156b22c215fe1cbc7bc1825867377f5fd37e9f774beb6ec"} Dec 03 14:20:52.500145 master-0 kubenswrapper[4430]: I1203 14:20:52.500118 4430 scope.go:117] "RemoveContainer" containerID="030f7dd0dec22bf96156b22c215fe1cbc7bc1825867377f5fd37e9f774beb6ec" Dec 03 14:20:52.503340 master-0 kubenswrapper[4430]: I1203 14:20:52.503301 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovnkube-controller/1.log" Dec 03 14:20:52.514124 master-0 kubenswrapper[4430]: I1203 14:20:52.514096 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovn-acl-logging/1.log" Dec 03 14:20:52.515170 master-0 kubenswrapper[4430]: I1203 14:20:52.515139 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="437f7ec09cdfbaaf0f2d8c750d87de41470c054124a451b74aa361e784f32913" exitCode=0 Dec 03 14:20:52.515286 master-0 kubenswrapper[4430]: I1203 14:20:52.515230 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"437f7ec09cdfbaaf0f2d8c750d87de41470c054124a451b74aa361e784f32913"} Dec 03 14:20:52.516211 master-0 kubenswrapper[4430]: I1203 14:20:52.516156 4430 scope.go:117] "RemoveContainer" 
containerID="91af6ef7f9da44b2fd5666c3d41bbac57f3b1b2e9b53696653ba5f67acb275c2" Dec 03 14:20:52.516211 master-0 kubenswrapper[4430]: I1203 14:20:52.516192 4430 scope.go:117] "RemoveContainer" containerID="437f7ec09cdfbaaf0f2d8c750d87de41470c054124a451b74aa361e784f32913" Dec 03 14:20:52.516211 master-0 kubenswrapper[4430]: I1203 14:20:52.516209 4430 scope.go:117] "RemoveContainer" containerID="4f4048bb8a9818d0d6d08b3a0c4266128e22b30fad60e11c85437aeb1c539071" Dec 03 14:20:52.516388 master-0 kubenswrapper[4430]: I1203 14:20:52.516223 4430 scope.go:117] "RemoveContainer" containerID="b52e70a2fd68cf19fc245924194323a013951ec6f99b3a5e99b3a9580cd13ee0" Dec 03 14:20:52.516388 master-0 kubenswrapper[4430]: I1203 14:20:52.516236 4430 scope.go:117] "RemoveContainer" containerID="7df03249c6c36a8bedc8b2855a0ac7732b8b760292170d049984dc2323c4c36c" Dec 03 14:20:52.516388 master-0 kubenswrapper[4430]: I1203 14:20:52.516249 4430 scope.go:117] "RemoveContainer" containerID="f6fc0b8da5448f87611e04db90aca266d162bc2637b73ede6c1ca2a74107e8f9" Dec 03 14:20:52.516388 master-0 kubenswrapper[4430]: I1203 14:20:52.516264 4430 scope.go:117] "RemoveContainer" containerID="876e3a6d236e0be4c450dc348094b65ed7c200ebe5e36f5297e4821af364dfde" Dec 03 14:20:52.520869 master-0 kubenswrapper[4430]: I1203 14:20:52.520819 4430 generic.go:334] "Generic (PLEG): container finished" podID="98392f8e-0285-4bc3-95a9-d29033639ca3" containerID="1813e95aa9179f1c5292e5b2348000ccebe08276790c97d9ea0ab42ce9345f9c" exitCode=0 Dec 03 14:20:52.521076 master-0 kubenswrapper[4430]: I1203 14:20:52.520873 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerDied","Data":"1813e95aa9179f1c5292e5b2348000ccebe08276790c97d9ea0ab42ce9345f9c"} Dec 03 14:20:52.521642 master-0 kubenswrapper[4430]: I1203 14:20:52.521611 4430 scope.go:117] "RemoveContainer" 
containerID="f702f47197a7be997d18ff5a17914c0f7a106fc6c0ef420b592e9470e20aa846" Dec 03 14:20:52.521723 master-0 kubenswrapper[4430]: I1203 14:20:52.521647 4430 scope.go:117] "RemoveContainer" containerID="1813e95aa9179f1c5292e5b2348000ccebe08276790c97d9ea0ab42ce9345f9c" Dec 03 14:20:52.603425 master-0 kubenswrapper[4430]: I1203 14:20:52.603361 4430 patch_prober.go:28] interesting pod/monitoring-plugin-547cc9cc49-kqs4k container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.88:9443/health\": dial tcp 10.128.0.88:9443: connect: connection refused" start-of-body= Dec 03 14:20:52.603538 master-0 kubenswrapper[4430]: I1203 14:20:52.603473 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.128.0.88:9443/health\": dial tcp 10.128.0.88:9443: connect: connection refused" Dec 03 14:20:52.606959 master-0 kubenswrapper[4430]: I1203 14:20:52.606907 4430 patch_prober.go:28] interesting pod/dns-default-5m4f8 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused" start-of-body= Dec 03 14:20:52.607027 master-0 kubenswrapper[4430]: I1203 14:20:52.606984 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused" Dec 03 14:20:52.627127 master-0 kubenswrapper[4430]: I1203 14:20:52.627058 4430 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: 
connect: connection refused" start-of-body= Dec 03 14:20:52.627301 master-0 kubenswrapper[4430]: I1203 14:20:52.627161 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-master-0" podUID="4dd8b778e190b1975a0a8fad534da6dd" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused" Dec 03 14:20:52.627301 master-0 kubenswrapper[4430]: I1203 14:20:52.627078 4430 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Dec 03 14:20:52.627301 master-0 kubenswrapper[4430]: I1203 14:20:52.627263 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="4dd8b778e190b1975a0a8fad534da6dd" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Dec 03 14:20:52.703990 master-0 kubenswrapper[4430]: I1203 14:20:52.703950 4430 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]controller ok Dec 03 14:20:52.703990 master-0 kubenswrapper[4430]: [-]backend-http failed: reason withheld Dec 03 14:20:52.703990 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:20:52.704183 master-0 kubenswrapper[4430]: I1203 14:20:52.704014 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:20:52.714319 master-0 kubenswrapper[4430]: I1203 14:20:52.714284 4430 patch_prober.go:28] interesting 
pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:20:52.714319 master-0 kubenswrapper[4430]: [+]has-synced ok Dec 03 14:20:52.714319 master-0 kubenswrapper[4430]: [-]process-running failed: reason withheld Dec 03 14:20:52.714319 master-0 kubenswrapper[4430]: healthz check failed Dec 03 14:20:52.714649 master-0 kubenswrapper[4430]: I1203 14:20:52.714357 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:20:52.780280 master-0 kubenswrapper[4430]: I1203 14:20:52.780197 4430 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Dec 03 14:20:52.780396 master-0 kubenswrapper[4430]: I1203 14:20:52.780318 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection refused" Dec 03 14:20:52.780396 master-0 kubenswrapper[4430]: I1203 14:20:52.780213 4430 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection 
refused" start-of-body= Dec 03 14:20:52.780497 master-0 kubenswrapper[4430]: I1203 14:20:52.780434 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection refused" Dec 03 14:20:52.831257 master-0 kubenswrapper[4430]: I1203 14:20:52.831184 4430 patch_prober.go:28] interesting pod/thanos-querier-cc996c4bd-j4hzr container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.128.0.85:9091/-/healthy\": dial tcp 10.128.0.85:9091: connect: connection refused" start-of-body= Dec 03 14:20:52.831257 master-0 kubenswrapper[4430]: I1203 14:20:52.831234 4430 patch_prober.go:28] interesting pod/thanos-querier-cc996c4bd-j4hzr container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.85:9091/-/ready\": dial tcp 10.128.0.85:9091: connect: connection refused" start-of-body= Dec 03 14:20:52.831558 master-0 kubenswrapper[4430]: I1203 14:20:52.831266 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.128.0.85:9091/-/ready\": dial tcp 10.128.0.85:9091: connect: connection refused" Dec 03 14:20:52.831558 master-0 kubenswrapper[4430]: I1203 14:20:52.831262 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.128.0.85:9091/-/healthy\": dial tcp 10.128.0.85:9091: connect: connection refused" Dec 03 
14:20:53.239299 master-0 kubenswrapper[4430]: I1203 14:20:53.239224 4430 patch_prober.go:28] interesting pod/packageserver-7c64dd9d8b-49skr container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" start-of-body= Dec 03 14:20:53.239533 master-0 kubenswrapper[4430]: I1203 14:20:53.239300 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" Dec 03 14:20:53.239533 master-0 kubenswrapper[4430]: I1203 14:20:53.239338 4430 patch_prober.go:28] interesting pod/packageserver-7c64dd9d8b-49skr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" start-of-body= Dec 03 14:20:53.239533 master-0 kubenswrapper[4430]: I1203 14:20:53.239481 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.90:5443/healthz\": dial tcp 10.128.0.90:5443: connect: connection refused" Dec 03 14:20:53.338339 master-0 kubenswrapper[4430]: I1203 14:20:53.338268 4430 patch_prober.go:28] interesting pod/network-check-target-pcchm container/network-check-target-container namespace/openshift-network-diagnostics: Readiness probe status=failure output="Get \"http://10.128.0.4:8080/\": dial tcp 10.128.0.4:8080: connect: connection refused" start-of-body= Dec 03 14:20:53.338339 master-0 kubenswrapper[4430]: I1203 14:20:53.338351 4430 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" containerName="network-check-target-container" probeResult="failure" output="Get \"http://10.128.0.4:8080/\": dial tcp 10.128.0.4:8080: connect: connection refused" Dec 03 14:20:53.360845 master-0 kubenswrapper[4430]: I1203 14:20:53.360780 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:20:53.360951 master-0 kubenswrapper[4430]: I1203 14:20:53.360879 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:20:53.371587 master-0 kubenswrapper[4430]: E1203 14:20:53.371501 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.371797 master-0 kubenswrapper[4430]: E1203 14:20:53.371510 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.372065 master-0 kubenswrapper[4430]: E1203 14:20:53.372016 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: 
container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.372163 master-0 kubenswrapper[4430]: E1203 14:20:53.372093 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.372473 master-0 kubenswrapper[4430]: E1203 14:20:53.372400 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.372572 master-0 kubenswrapper[4430]: E1203 14:20:53.372457 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerName="registry-server" Dec 03 14:20:53.372572 master-0 kubenswrapper[4430]: E1203 14:20:53.372490 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.372738 master-0 kubenswrapper[4430]: E1203 14:20:53.372579 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503 is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerName="registry-server" Dec 03 14:20:53.394617 master-0 kubenswrapper[4430]: I1203 14:20:53.394546 4430 patch_prober.go:28] interesting pod/catalog-operator-7cf5cf757f-zgm6l container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.58:8443/healthz\": dial tcp 10.128.0.58:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.394807 master-0 kubenswrapper[4430]: I1203 14:20:53.394605 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.58:8443/healthz\": dial tcp 10.128.0.58:8443: connect: connection refused" Dec 03 14:20:53.394946 master-0 kubenswrapper[4430]: I1203 14:20:53.394877 4430 patch_prober.go:28] interesting pod/catalog-operator-7cf5cf757f-zgm6l container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.58:8443/healthz\": dial tcp 10.128.0.58:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.395047 master-0 kubenswrapper[4430]: I1203 14:20:53.394986 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" containerName="catalog-operator" probeResult="failure" 
output="Get \"https://10.128.0.58:8443/healthz\": dial tcp 10.128.0.58:8443: connect: connection refused" Dec 03 14:20:53.443983 master-0 kubenswrapper[4430]: I1203 14:20:53.443931 4430 patch_prober.go:28] interesting pod/oauth-openshift-747bdb58b5-mn76f container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.128.0.94:6443/healthz\": dial tcp 10.128.0.94:6443: connect: connection refused" start-of-body= Dec 03 14:20:53.444375 master-0 kubenswrapper[4430]: I1203 14:20:53.443936 4430 patch_prober.go:28] interesting pod/authentication-operator-7479ffdf48-hpdzl container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.444375 master-0 kubenswrapper[4430]: I1203 14:20:53.444040 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Dec 03 14:20:53.444375 master-0 kubenswrapper[4430]: I1203 14:20:53.443987 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.94:6443/healthz\": dial tcp 10.128.0.94:6443: connect: connection refused" Dec 03 14:20:53.444375 master-0 kubenswrapper[4430]: I1203 14:20:53.444278 4430 patch_prober.go:28] interesting pod/oauth-openshift-747bdb58b5-mn76f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.94:6443/healthz\": dial tcp 10.128.0.94:6443: 
connect: connection refused" start-of-body= Dec 03 14:20:53.444542 master-0 kubenswrapper[4430]: I1203 14:20:53.444393 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.94:6443/healthz\": dial tcp 10.128.0.94:6443: connect: connection refused" Dec 03 14:20:53.461010 master-0 kubenswrapper[4430]: I1203 14:20:53.460898 4430 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.461010 master-0 kubenswrapper[4430]: I1203 14:20:53.460942 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:20:53.461221 master-0 kubenswrapper[4430]: I1203 14:20:53.461179 4430 patch_prober.go:28] interesting pod/openshift-config-operator-68c95b6cf5-fmdmz container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.461274 master-0 kubenswrapper[4430]: I1203 14:20:53.461236 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.128.0.68:8443/healthz\": dial tcp 10.128.0.68:8443: connect: connection refused" Dec 03 14:20:53.473302 master-0 kubenswrapper[4430]: E1203 14:20:53.473198 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.473302 master-0 kubenswrapper[4430]: E1203 14:20:53.473251 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.473995 master-0 kubenswrapper[4430]: E1203 14:20:53.473917 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.474129 master-0 kubenswrapper[4430]: E1203 14:20:53.473928 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.474456 master-0 kubenswrapper[4430]: 
E1203 14:20:53.474370 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.474456 master-0 kubenswrapper[4430]: E1203 14:20:53.474396 4430 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" cmd=["grpc_health_probe","-addr=:50051"] Dec 03 14:20:53.474599 master-0 kubenswrapper[4430]: E1203 14:20:53.474451 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerName="registry-server" Dec 03 14:20:53.474599 master-0 kubenswrapper[4430]: E1203 14:20:53.474470 4430 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerName="registry-server" Dec 03 14:20:53.497779 master-0 kubenswrapper[4430]: I1203 14:20:53.497718 4430 patch_prober.go:28] interesting pod/etcd-operator-7978bf889c-n64v4 container/etcd-operator 
namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Dec 03 14:20:53.497779 master-0 kubenswrapper[4430]: I1203 14:20:53.497768 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: I1203 14:20:53.521582 4430 patch_prober.go:28] interesting pod/apiserver-6985f84b49-v9vlg container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]log ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]etcd excluded: ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]etcd-readiness excluded: ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]informer-sync ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/max-in-flight-filter ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 03 14:20:53.521696 master-0 
kubenswrapper[4430]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-startinformers ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: [-]shutdown failed: reason withheld Dec 03 14:20:53.521696 master-0 kubenswrapper[4430]: readyz check failed Dec 03 14:20:53.523305 master-0 kubenswrapper[4430]: I1203 14:20:53.521723 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:20:53.533909 master-0 kubenswrapper[4430]: I1203 14:20:53.533844 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7dcc7f9bd6-68wml_8eee1d96-2f58-41a6-ae51-c158b29fc813/kube-state-metrics/2.log" Dec 03 14:20:53.534042 master-0 kubenswrapper[4430]: I1203 14:20:53.533915 4430 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" containerID="ab76a9a8e06025c1dec72e891ccadecf9327b11d180768dbf27a99508a221390" exitCode=0 Dec 03 14:20:53.534042 master-0 kubenswrapper[4430]: I1203 14:20:53.533944 4430 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" containerID="32f33a8b3c820eff796ad6f9051a5bf68fd94da78a37ae44f9dab8ddbd2cbf58" exitCode=0 Dec 03 14:20:53.534042 master-0 kubenswrapper[4430]: I1203 14:20:53.533959 4430 generic.go:334] "Generic (PLEG): container finished" podID="8eee1d96-2f58-41a6-ae51-c158b29fc813" 
containerID="c27616ee97a233440b469b407debbcfb798dea5820539899850ae5f5e5b89175" exitCode=2 Dec 03 14:20:53.534305 master-0 kubenswrapper[4430]: I1203 14:20:53.534024 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"ab76a9a8e06025c1dec72e891ccadecf9327b11d180768dbf27a99508a221390"} Dec 03 14:20:53.534676 master-0 kubenswrapper[4430]: I1203 14:20:53.534596 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"32f33a8b3c820eff796ad6f9051a5bf68fd94da78a37ae44f9dab8ddbd2cbf58"} Dec 03 14:20:53.534790 master-0 kubenswrapper[4430]: I1203 14:20:53.534730 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerDied","Data":"c27616ee97a233440b469b407debbcfb798dea5820539899850ae5f5e5b89175"} Dec 03 14:20:53.534956 master-0 kubenswrapper[4430]: I1203 14:20:53.534914 4430 scope.go:117] "RemoveContainer" containerID="c27616ee97a233440b469b407debbcfb798dea5820539899850ae5f5e5b89175" Dec 03 14:20:53.535030 master-0 kubenswrapper[4430]: I1203 14:20:53.534962 4430 scope.go:117] "RemoveContainer" containerID="32f33a8b3c820eff796ad6f9051a5bf68fd94da78a37ae44f9dab8ddbd2cbf58" Dec 03 14:20:53.535030 master-0 kubenswrapper[4430]: I1203 14:20:53.534984 4430 scope.go:117] "RemoveContainer" containerID="ab76a9a8e06025c1dec72e891ccadecf9327b11d180768dbf27a99508a221390" Dec 03 14:20:53.537123 master-0 kubenswrapper[4430]: I1203 14:20:53.537067 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: 
connection refused" start-of-body= Dec 03 14:20:53.537225 master-0 kubenswrapper[4430]: I1203 14:20:53.537126 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:20:53.537735 master-0 kubenswrapper[4430]: I1203 14:20:53.537687 4430 generic.go:334] "Generic (PLEG): container finished" podID="b340553b-d483-4839-8328-518f27770832" containerID="268a80f5afa3735e3b2ff1f7c9d91828f76700e580dcfe3a093ed071f55a17f1" exitCode=0 Dec 03 14:20:53.537735 master-0 kubenswrapper[4430]: I1203 14:20:53.537724 4430 generic.go:334] "Generic (PLEG): container finished" podID="b340553b-d483-4839-8328-518f27770832" containerID="4a52998d3e2c831cd3e0252ad2b4031d2564fefaca6ffb1a4074fbc3cf15e530" exitCode=0 Dec 03 14:20:53.538173 master-0 kubenswrapper[4430]: I1203 14:20:53.537760 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerDied","Data":"268a80f5afa3735e3b2ff1f7c9d91828f76700e580dcfe3a093ed071f55a17f1"} Dec 03 14:20:53.538173 master-0 kubenswrapper[4430]: I1203 14:20:53.537838 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerDied","Data":"4a52998d3e2c831cd3e0252ad2b4031d2564fefaca6ffb1a4074fbc3cf15e530"} Dec 03 14:20:53.540758 master-0 kubenswrapper[4430]: I1203 14:20:53.537696 4430 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection 
refused" start-of-body= Dec 03 14:20:53.540918 master-0 kubenswrapper[4430]: I1203 14:20:53.540834 4430 scope.go:117] "RemoveContainer" containerID="4a52998d3e2c831cd3e0252ad2b4031d2564fefaca6ffb1a4074fbc3cf15e530" Dec 03 14:20:53.540995 master-0 kubenswrapper[4430]: I1203 14:20:53.540961 4430 scope.go:117] "RemoveContainer" containerID="268a80f5afa3735e3b2ff1f7c9d91828f76700e580dcfe3a093ed071f55a17f1" Dec 03 14:20:53.541255 master-0 kubenswrapper[4430]: I1203 14:20:53.540829 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:20:53.561461 master-0 kubenswrapper[4430]: I1203 14:20:53.561330 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerDied","Data":"fcca5b73de79e0a5e92be5518e4eac372c0dbac239f33776e14037dde54264ca"} Dec 03 14:20:53.562272 master-0 kubenswrapper[4430]: I1203 14:20:53.561253 4430 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" containerID="fcca5b73de79e0a5e92be5518e4eac372c0dbac239f33776e14037dde54264ca" exitCode=0 Dec 03 14:20:53.562430 master-0 kubenswrapper[4430]: I1203 14:20:53.562378 4430 generic.go:334] "Generic (PLEG): container finished" podID="bcc78129-4a81-410e-9a42-b12043b5a75a" containerID="ac19b8941583f505bca7f19ba1737738de3c95824b818d9b1b242be3535ba095" exitCode=0 Dec 03 14:20:53.562557 master-0 kubenswrapper[4430]: I1203 14:20:53.562408 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" 
event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerDied","Data":"ac19b8941583f505bca7f19ba1737738de3c95824b818d9b1b242be3535ba095"} Dec 03 14:20:53.564251 master-0 kubenswrapper[4430]: I1203 14:20:53.564198 4430 scope.go:117] "RemoveContainer" containerID="ac19b8941583f505bca7f19ba1737738de3c95824b818d9b1b242be3535ba095" Dec 03 14:20:53.564372 master-0 kubenswrapper[4430]: I1203 14:20:53.564288 4430 scope.go:117] "RemoveContainer" containerID="fcca5b73de79e0a5e92be5518e4eac372c0dbac239f33776e14037dde54264ca" Dec 03 14:20:53.566583 master-0 kubenswrapper[4430]: I1203 14:20:53.566531 4430 generic.go:334] "Generic (PLEG): container finished" podID="d7d6a05e-beee-40e9-b376-5c22e285b27a" containerID="4f2dfbfa8ca94b5824611cefb87c0e9841e76fbc58d3e7950aee65cdd550fb55" exitCode=0 Dec 03 14:20:53.566726 master-0 kubenswrapper[4430]: I1203 14:20:53.566670 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerDied","Data":"4f2dfbfa8ca94b5824611cefb87c0e9841e76fbc58d3e7950aee65cdd550fb55"} Dec 03 14:20:53.567226 master-0 kubenswrapper[4430]: I1203 14:20:53.567183 4430 scope.go:117] "RemoveContainer" containerID="4f2dfbfa8ca94b5824611cefb87c0e9841e76fbc58d3e7950aee65cdd550fb55" Dec 03 14:20:53.573171 master-0 kubenswrapper[4430]: I1203 14:20:53.573116 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-764cbf5554-kftwv_829d285f-d532-45e4-b1ec-54adbc21b9f9/telemeter-client/0.log" Dec 03 14:20:53.573293 master-0 kubenswrapper[4430]: I1203 14:20:53.573184 4430 generic.go:334] "Generic (PLEG): container finished" podID="829d285f-d532-45e4-b1ec-54adbc21b9f9" containerID="5f404d464133aa8363203704d73365d24b12404b65044116fabfafd44fab495c" exitCode=0 Dec 03 14:20:53.573293 master-0 kubenswrapper[4430]: I1203 14:20:53.573217 4430 generic.go:334] "Generic (PLEG): container finished" 
podID="829d285f-d532-45e4-b1ec-54adbc21b9f9" containerID="ac7f4cd2def2bb496dc5f5aa1e8f39d0c213c9f9d0e8923d0950adbd07e9c37b" exitCode=0 Dec 03 14:20:53.573293 master-0 kubenswrapper[4430]: I1203 14:20:53.573230 4430 generic.go:334] "Generic (PLEG): container finished" podID="829d285f-d532-45e4-b1ec-54adbc21b9f9" containerID="ae814fc15ad121d6706b68a31e9d8c53b23ad4825832da984dbf729ef7765a86" exitCode=2 Dec 03 14:20:53.573293 master-0 kubenswrapper[4430]: I1203 14:20:53.573279 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerDied","Data":"5f404d464133aa8363203704d73365d24b12404b65044116fabfafd44fab495c"} Dec 03 14:20:53.573293 master-0 kubenswrapper[4430]: I1203 14:20:53.573301 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerDied","Data":"ac7f4cd2def2bb496dc5f5aa1e8f39d0c213c9f9d0e8923d0950adbd07e9c37b"} Dec 03 14:20:53.573637 master-0 kubenswrapper[4430]: I1203 14:20:53.573314 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerDied","Data":"ae814fc15ad121d6706b68a31e9d8c53b23ad4825832da984dbf729ef7765a86"} Dec 03 14:20:53.573700 master-0 kubenswrapper[4430]: I1203 14:20:53.573678 4430 scope.go:117] "RemoveContainer" containerID="ae814fc15ad121d6706b68a31e9d8c53b23ad4825832da984dbf729ef7765a86" Dec 03 14:20:53.573700 master-0 kubenswrapper[4430]: I1203 14:20:53.573697 4430 scope.go:117] "RemoveContainer" containerID="ac7f4cd2def2bb496dc5f5aa1e8f39d0c213c9f9d0e8923d0950adbd07e9c37b" Dec 03 14:20:53.573898 master-0 kubenswrapper[4430]: I1203 14:20:53.573710 4430 scope.go:117] "RemoveContainer" containerID="5f404d464133aa8363203704d73365d24b12404b65044116fabfafd44fab495c" Dec 
03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579660 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="c44fccacafcc89a7f41f51bda645d91507bff12a5ee3a2f34749020e54160bbe" exitCode=0 Dec 03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579702 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="08c73268b9592ac8d03df102a14fe7d51af1e19b4679e409ed49670e07faf1b0" exitCode=0 Dec 03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579712 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="459ea4c37a38e3d087f3b8db48d49da05ce0bc01a55b7d42b544294505774666" exitCode=0 Dec 03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579720 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="0d8d53a093e5b4a209c70f9f168898ec080129a115dd5a593d75037b55b3d878" exitCode=0 Dec 03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579727 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="4f0c760191e6deb4a438d22186cf7870bb999b9485974340a2385e20d2dae583" exitCode=0 Dec 03 14:20:53.579724 master-0 kubenswrapper[4430]: I1203 14:20:53.579739 4430 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" exitCode=0 Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 14:20:53.579789 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"c44fccacafcc89a7f41f51bda645d91507bff12a5ee3a2f34749020e54160bbe"} Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 
14:20:53.579963 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"08c73268b9592ac8d03df102a14fe7d51af1e19b4679e409ed49670e07faf1b0"} Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 14:20:53.579990 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"459ea4c37a38e3d087f3b8db48d49da05ce0bc01a55b7d42b544294505774666"} Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 14:20:53.580011 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"0d8d53a093e5b4a209c70f9f168898ec080129a115dd5a593d75037b55b3d878"} Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 14:20:53.580033 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"4f0c760191e6deb4a438d22186cf7870bb999b9485974340a2385e20d2dae583"} Dec 03 14:20:53.580583 master-0 kubenswrapper[4430]: I1203 14:20:53.580053 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1"} Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 14:20:53.581403 4430 scope.go:117] "RemoveContainer" containerID="7592a0e8aeee1aaea4a50777cf9ac1370ebb613e8760432518bef2ae164d22d1" Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 14:20:53.581497 4430 scope.go:117] "RemoveContainer" containerID="4f0c760191e6deb4a438d22186cf7870bb999b9485974340a2385e20d2dae583" Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 
14:20:53.581517 4430 scope.go:117] "RemoveContainer" containerID="0d8d53a093e5b4a209c70f9f168898ec080129a115dd5a593d75037b55b3d878" Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 14:20:53.581533 4430 scope.go:117] "RemoveContainer" containerID="459ea4c37a38e3d087f3b8db48d49da05ce0bc01a55b7d42b544294505774666" Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 14:20:53.581549 4430 scope.go:117] "RemoveContainer" containerID="08c73268b9592ac8d03df102a14fe7d51af1e19b4679e409ed49670e07faf1b0" Dec 03 14:20:53.581555 master-0 kubenswrapper[4430]: I1203 14:20:53.581567 4430 scope.go:117] "RemoveContainer" containerID="c44fccacafcc89a7f41f51bda645d91507bff12a5ee3a2f34749020e54160bbe" Dec 03 14:20:53.585426 master-0 kubenswrapper[4430]: I1203 14:20:53.585354 4430 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="7283e9c3ac39e31379bde2bcb4f5c88ca8ac8b17f2a1ee4bbb4c0394215cba1e" exitCode=0 Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.588108 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-7c4697b5f5-9f69p_adbcce01-7282-4a75-843a-9623060346f0/openshift-controller-manager-operator/3.log" Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.588148 4430 generic.go:334] "Generic (PLEG): container finished" podID="adbcce01-7282-4a75-843a-9623060346f0" containerID="663e8a37fd419b7f79754aa24f5933c92f81a1598e294f4a7ce88dc057a79131" exitCode=1 Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.591628 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/config-sync-controllers/2.log" Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.592179 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq_6b681889-eb2c-41fb-a1dc-69b99227b45b/cluster-cloud-controller-manager/2.log" Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.592211 4430 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="d523e57877f21ef3a75f49049c5491db001e1951e8c205f8a24f1c3ad8c18bfc" exitCode=0 Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.592226 4430 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="8042d58d3159be0fcc39c083d9468beb138a750ed338d5aa4389b51a68544c23" exitCode=0 Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.592235 4430 generic.go:334] "Generic (PLEG): container finished" podID="6b681889-eb2c-41fb-a1dc-69b99227b45b" containerID="6294fba576b1de2ecb3035eff143115b1a07a2fa711867db163f33fa80b48bf3" exitCode=0 Dec 03 14:20:53.596126 master-0 kubenswrapper[4430]: I1203 14:20:53.594143 4430 generic.go:334] "Generic (PLEG): container finished" podID="42c95e54-b4ba-4b19-a97c-abcec840ac5d" containerID="bbead94692b339bd07c2e48969b5e8e5d7bc96f40b82a72fbab8051c0835433b" exitCode=0 Dec 03 14:20:53.597346 master-0 kubenswrapper[4430]: I1203 14:20:53.597264 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerDied","Data":"7283e9c3ac39e31379bde2bcb4f5c88ca8ac8b17f2a1ee4bbb4c0394215cba1e"} Dec 03 14:20:53.597465 master-0 kubenswrapper[4430]: I1203 14:20:53.597360 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerDied","Data":"663e8a37fd419b7f79754aa24f5933c92f81a1598e294f4a7ce88dc057a79131"} Dec 03 
14:20:53.597465 master-0 kubenswrapper[4430]: I1203 14:20:53.597392 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"d523e57877f21ef3a75f49049c5491db001e1951e8c205f8a24f1c3ad8c18bfc"} Dec 03 14:20:53.597465 master-0 kubenswrapper[4430]: I1203 14:20:53.597424 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"8042d58d3159be0fcc39c083d9468beb138a750ed338d5aa4389b51a68544c23"} Dec 03 14:20:53.597753 master-0 kubenswrapper[4430]: I1203 14:20:53.597603 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerDied","Data":"6294fba576b1de2ecb3035eff143115b1a07a2fa711867db163f33fa80b48bf3"} Dec 03 14:20:53.597753 master-0 kubenswrapper[4430]: I1203 14:20:53.597638 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerDied","Data":"bbead94692b339bd07c2e48969b5e8e5d7bc96f40b82a72fbab8051c0835433b"} Dec 03 14:20:53.598262 master-0 kubenswrapper[4430]: I1203 14:20:53.598217 4430 scope.go:117] "RemoveContainer" containerID="7283e9c3ac39e31379bde2bcb4f5c88ca8ac8b17f2a1ee4bbb4c0394215cba1e" Dec 03 14:20:53.598370 master-0 kubenswrapper[4430]: I1203 14:20:53.598297 4430 scope.go:117] "RemoveContainer" containerID="bbead94692b339bd07c2e48969b5e8e5d7bc96f40b82a72fbab8051c0835433b" Dec 03 14:20:53.598463 master-0 kubenswrapper[4430]: I1203 14:20:53.598384 4430 scope.go:117] "RemoveContainer" 
containerID="8042d58d3159be0fcc39c083d9468beb138a750ed338d5aa4389b51a68544c23" Dec 03 14:20:53.598463 master-0 kubenswrapper[4430]: I1203 14:20:53.598435 4430 scope.go:117] "RemoveContainer" containerID="d523e57877f21ef3a75f49049c5491db001e1951e8c205f8a24f1c3ad8c18bfc" Dec 03 14:20:53.598463 master-0 kubenswrapper[4430]: I1203 14:20:53.598453 4430 scope.go:117] "RemoveContainer" containerID="6294fba576b1de2ecb3035eff143115b1a07a2fa711867db163f33fa80b48bf3" Dec 03 14:20:53.598644 master-0 kubenswrapper[4430]: I1203 14:20:53.598483 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_8a00233b22d19df39b2e1c8ba133b3c2/kube-apiserver-cert-syncer/0.log" Dec 03 14:20:53.598644 master-0 kubenswrapper[4430]: I1203 14:20:53.598533 4430 scope.go:117] "RemoveContainer" containerID="663e8a37fd419b7f79754aa24f5933c92f81a1598e294f4a7ce88dc057a79131" Dec 03 14:20:53.600251 master-0 kubenswrapper[4430]: I1203 14:20:53.600076 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc" exitCode=0 Dec 03 14:20:53.600251 master-0 kubenswrapper[4430]: I1203 14:20:53.600107 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01" exitCode=0 Dec 03 14:20:53.600251 master-0 kubenswrapper[4430]: I1203 14:20:53.600117 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4" exitCode=2 Dec 03 14:20:53.600251 master-0 kubenswrapper[4430]: I1203 14:20:53.600217 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc"} Dec 03 14:20:53.600557 master-0 kubenswrapper[4430]: I1203 14:20:53.600330 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01"} Dec 03 14:20:53.600557 master-0 kubenswrapper[4430]: I1203 14:20:53.600366 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4"} Dec 03 14:20:53.601782 master-0 kubenswrapper[4430]: I1203 14:20:53.601746 4430 scope.go:117] "RemoveContainer" containerID="c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4" Dec 03 14:20:53.602505 master-0 kubenswrapper[4430]: I1203 14:20:53.602221 4430 scope.go:117] "RemoveContainer" containerID="b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01" Dec 03 14:20:53.602707 master-0 kubenswrapper[4430]: I1203 14:20:53.602259 4430 scope.go:117] "RemoveContainer" containerID="a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc" Dec 03 14:20:53.603467 master-0 kubenswrapper[4430]: I1203 14:20:53.603123 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-7486ff55f-wcnxg_e9f484c1-1564-49c7-a43d-bd8b971cea20/machine-api-operator/1.log" Dec 03 14:20:53.603767 master-0 kubenswrapper[4430]: I1203 14:20:53.603732 4430 generic.go:334] "Generic (PLEG): container finished" podID="e9f484c1-1564-49c7-a43d-bd8b971cea20" containerID="66b01e50689fc76d3d521545e6f121110eaac09350e8b3c349449c13b1cce1b9" exitCode=2 Dec 03 14:20:53.603847 master-0 kubenswrapper[4430]: I1203 14:20:53.603770 4430 
generic.go:334] "Generic (PLEG): container finished" podID="e9f484c1-1564-49c7-a43d-bd8b971cea20" containerID="4a2b8028d7c2b5e304e7ce9eca67ce37ed97bbf7fef945eae54a284c26d8dd52" exitCode=0 Dec 03 14:20:53.603912 master-0 kubenswrapper[4430]: I1203 14:20:53.603831 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerDied","Data":"66b01e50689fc76d3d521545e6f121110eaac09350e8b3c349449c13b1cce1b9"} Dec 03 14:20:53.603912 master-0 kubenswrapper[4430]: I1203 14:20:53.603890 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerDied","Data":"4a2b8028d7c2b5e304e7ce9eca67ce37ed97bbf7fef945eae54a284c26d8dd52"} Dec 03 14:20:53.604492 master-0 kubenswrapper[4430]: I1203 14:20:53.604448 4430 scope.go:117] "RemoveContainer" containerID="4a2b8028d7c2b5e304e7ce9eca67ce37ed97bbf7fef945eae54a284c26d8dd52" Dec 03 14:20:53.604492 master-0 kubenswrapper[4430]: I1203 14:20:53.604493 4430 scope.go:117] "RemoveContainer" containerID="66b01e50689fc76d3d521545e6f121110eaac09350e8b3c349449c13b1cce1b9" Dec 03 14:20:53.606283 master-0 kubenswrapper[4430]: I1203 14:20:53.606239 4430 generic.go:334] "Generic (PLEG): container finished" podID="8c6fa89f-268c-477b-9f04-238d2305cc89" containerID="08ced1c3618fdc83fc2e72fb6738f0de722b94b0761b7be5638a318aed1b0c8a" exitCode=0 Dec 03 14:20:53.606420 master-0 kubenswrapper[4430]: I1203 14:20:53.606306 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerDied","Data":"08ced1c3618fdc83fc2e72fb6738f0de722b94b0761b7be5638a318aed1b0c8a"} Dec 03 14:20:53.606792 master-0 kubenswrapper[4430]: I1203 14:20:53.606755 4430 scope.go:117] "RemoveContainer" 
containerID="08ced1c3618fdc83fc2e72fb6738f0de722b94b0761b7be5638a318aed1b0c8a" Dec 03 14:20:53.611070 master-0 kubenswrapper[4430]: I1203 14:20:53.611021 4430 generic.go:334] "Generic (PLEG): container finished" podID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" containerID="82bfca5befe892227dcb2cc9f2c980331e3b5591d6fe13cb2f4b26d2be585fe1" exitCode=0 Dec 03 14:20:53.611070 master-0 kubenswrapper[4430]: I1203 14:20:53.611056 4430 generic.go:334] "Generic (PLEG): container finished" podID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" containerID="f430efa90aa1a14922254c020cd93a9e38056484cff7a2242fbc1eaaf67809b1" exitCode=0 Dec 03 14:20:53.611266 master-0 kubenswrapper[4430]: I1203 14:20:53.611128 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerDied","Data":"82bfca5befe892227dcb2cc9f2c980331e3b5591d6fe13cb2f4b26d2be585fe1"} Dec 03 14:20:53.611579 master-0 kubenswrapper[4430]: I1203 14:20:53.611524 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerDied","Data":"f430efa90aa1a14922254c020cd93a9e38056484cff7a2242fbc1eaaf67809b1"} Dec 03 14:20:53.615244 master-0 kubenswrapper[4430]: I1203 14:20:53.615180 4430 scope.go:117] "RemoveContainer" containerID="f430efa90aa1a14922254c020cd93a9e38056484cff7a2242fbc1eaaf67809b1" Dec 03 14:20:53.615244 master-0 kubenswrapper[4430]: I1203 14:20:53.615235 4430 scope.go:117] "RemoveContainer" containerID="82bfca5befe892227dcb2cc9f2c980331e3b5591d6fe13cb2f4b26d2be585fe1" Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: I1203 14:20:53.616195 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="4ea3be14129d747d3319cc01eb1e20f1c667535f6d7f6f3f37fa56ddded71556" exitCode=0 Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: 
I1203 14:20:53.616223 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="7478ef4361a9a8412f63ccae624bfe30f6d3fc0665bf96d1e52d3e33b8313db0" exitCode=0 Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: I1203 14:20:53.616233 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="18c5a85efe83dfd9e427d45d6180c81438ff37c2286c08cbb7d19c4a3ea3360e" exitCode=0 Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: I1203 14:20:53.616241 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="8b009fd8a9bad6827f986727dcda599e0b5809b316a6de90d75d38b79726932c" exitCode=0 Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: I1203 14:20:53.616249 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="f6b0a5f597fc99a1dff83a421933bf55dc9808dd51bfe542c807a86b6e6d7922" exitCode=0 Dec 03 14:20:53.616278 master-0 kubenswrapper[4430]: I1203 14:20:53.616256 4430 generic.go:334] "Generic (PLEG): container finished" podID="8a12409a-0be3-4023-9df3-a0f091aac8dc" containerID="df7772326af1c4ac74c7b47c9e456bf0b8dba5b05ac47dd7020f9aef132452b5" exitCode=0 Dec 03 14:20:53.616884 master-0 kubenswrapper[4430]: I1203 14:20:53.616310 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"4ea3be14129d747d3319cc01eb1e20f1c667535f6d7f6f3f37fa56ddded71556"} Dec 03 14:20:53.616884 master-0 kubenswrapper[4430]: I1203 14:20:53.616472 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"7478ef4361a9a8412f63ccae624bfe30f6d3fc0665bf96d1e52d3e33b8313db0"} Dec 03 14:20:53.616884 master-0 
kubenswrapper[4430]: I1203 14:20:53.616519 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"18c5a85efe83dfd9e427d45d6180c81438ff37c2286c08cbb7d19c4a3ea3360e"} Dec 03 14:20:53.616884 master-0 kubenswrapper[4430]: I1203 14:20:53.616557 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"8b009fd8a9bad6827f986727dcda599e0b5809b316a6de90d75d38b79726932c"} Dec 03 14:20:53.616884 master-0 kubenswrapper[4430]: I1203 14:20:53.616585 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"f6b0a5f597fc99a1dff83a421933bf55dc9808dd51bfe542c807a86b6e6d7922"} Dec 03 14:20:53.616884 master-0 kubenswrapper[4430]: I1203 14:20:53.616623 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerDied","Data":"df7772326af1c4ac74c7b47c9e456bf0b8dba5b05ac47dd7020f9aef132452b5"} Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617752 4430 scope.go:117] "RemoveContainer" containerID="df7772326af1c4ac74c7b47c9e456bf0b8dba5b05ac47dd7020f9aef132452b5" Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617808 4430 scope.go:117] "RemoveContainer" containerID="f6b0a5f597fc99a1dff83a421933bf55dc9808dd51bfe542c807a86b6e6d7922" Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617820 4430 scope.go:117] "RemoveContainer" containerID="8b009fd8a9bad6827f986727dcda599e0b5809b316a6de90d75d38b79726932c" Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617830 4430 scope.go:117] 
"RemoveContainer" containerID="18c5a85efe83dfd9e427d45d6180c81438ff37c2286c08cbb7d19c4a3ea3360e" Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617839 4430 scope.go:117] "RemoveContainer" containerID="7478ef4361a9a8412f63ccae624bfe30f6d3fc0665bf96d1e52d3e33b8313db0" Dec 03 14:20:53.618236 master-0 kubenswrapper[4430]: I1203 14:20:53.617851 4430 scope.go:117] "RemoveContainer" containerID="4ea3be14129d747d3319cc01eb1e20f1c667535f6d7f6f3f37fa56ddded71556" Dec 03 14:20:53.626711 master-0 kubenswrapper[4430]: I1203 14:20:53.626596 4430 generic.go:334] "Generic (PLEG): container finished" podID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" containerID="6763556ae5d3c40350287a7a36e5f8ecd1ff5403f56b53c354aec326ec8d1a4c" exitCode=0 Dec 03 14:20:53.626711 master-0 kubenswrapper[4430]: I1203 14:20:53.626636 4430 generic.go:334] "Generic (PLEG): container finished" podID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" containerID="bae107df9b3a9e762d5264f84db57403126e7f9a8cc1809fa06f7e1b7657108c" exitCode=0 Dec 03 14:20:53.626711 master-0 kubenswrapper[4430]: I1203 14:20:53.626655 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerDied","Data":"6763556ae5d3c40350287a7a36e5f8ecd1ff5403f56b53c354aec326ec8d1a4c"} Dec 03 14:20:53.626869 master-0 kubenswrapper[4430]: I1203 14:20:53.626727 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerDied","Data":"bae107df9b3a9e762d5264f84db57403126e7f9a8cc1809fa06f7e1b7657108c"} Dec 03 14:20:53.627785 master-0 kubenswrapper[4430]: I1203 14:20:53.627424 4430 scope.go:117] "RemoveContainer" containerID="bae107df9b3a9e762d5264f84db57403126e7f9a8cc1809fa06f7e1b7657108c" Dec 03 14:20:53.627785 master-0 kubenswrapper[4430]: I1203 14:20:53.627494 4430 scope.go:117] "RemoveContainer" 
containerID="6763556ae5d3c40350287a7a36e5f8ecd1ff5403f56b53c354aec326ec8d1a4c" Dec 03 14:20:53.633508 master-0 kubenswrapper[4430]: I1203 14:20:53.630694 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-c8csx_da583723-b3ad-4a6f-b586-09b739bd7f8c/approver/4.log" Dec 03 14:20:53.633508 master-0 kubenswrapper[4430]: I1203 14:20:53.631542 4430 generic.go:334] "Generic (PLEG): container finished" podID="da583723-b3ad-4a6f-b586-09b739bd7f8c" containerID="43bf3109c8eaedbe3dd590a49c242b2c73b0d2a5937982df49275d827c77658e" exitCode=0 Dec 03 14:20:53.633508 master-0 kubenswrapper[4430]: I1203 14:20:53.631653 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerDied","Data":"43bf3109c8eaedbe3dd590a49c242b2c73b0d2a5937982df49275d827c77658e"} Dec 03 14:20:53.633508 master-0 kubenswrapper[4430]: I1203 14:20:53.632268 4430 scope.go:117] "RemoveContainer" containerID="43bf3109c8eaedbe3dd590a49c242b2c73b0d2a5937982df49275d827c77658e" Dec 03 14:20:53.633508 master-0 kubenswrapper[4430]: E1203 14:20:53.632549 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-c8csx_openshift-network-node-identity(da583723-b3ad-4a6f-b586-09b739bd7f8c)\"" pod="openshift-network-node-identity/network-node-identity-c8csx" podUID="da583723-b3ad-4a6f-b586-09b739bd7f8c" Dec 03 14:20:53.635029 master-0 kubenswrapper[4430]: I1203 14:20:53.634985 4430 generic.go:334] "Generic (PLEG): container finished" podID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerID="9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb" exitCode=0 Dec 03 14:20:53.635090 master-0 kubenswrapper[4430]: I1203 14:20:53.635064 4430 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerDied","Data":"9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb"} Dec 03 14:20:53.635523 master-0 kubenswrapper[4430]: I1203 14:20:53.635481 4430 scope.go:117] "RemoveContainer" containerID="9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb" Dec 03 14:20:53.639192 master-0 kubenswrapper[4430]: I1203 14:20:53.639157 4430 generic.go:334] "Generic (PLEG): container finished" podID="98392f8e-0285-4bc3-95a9-d29033639ca3" containerID="f702f47197a7be997d18ff5a17914c0f7a106fc6c0ef420b592e9470e20aa846" exitCode=0 Dec 03 14:20:53.639302 master-0 kubenswrapper[4430]: I1203 14:20:53.639255 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerDied","Data":"f702f47197a7be997d18ff5a17914c0f7a106fc6c0ef420b592e9470e20aa846"} Dec 03 14:20:53.641111 master-0 kubenswrapper[4430]: I1203 14:20:53.641083 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-7c4dc67499-tjwg8_eefee934-ac6b-44e3-a6be-1ae62362ab4f/cloud-credential-operator/1.log" Dec 03 14:20:53.641505 master-0 kubenswrapper[4430]: I1203 14:20:53.641479 4430 generic.go:334] "Generic (PLEG): container finished" podID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" containerID="54b6140cbea417161375e860fb10d603f09badb0c109ed0bc2f7bc20067a5846" exitCode=1 Dec 03 14:20:53.641562 master-0 kubenswrapper[4430]: I1203 14:20:53.641539 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerDied","Data":"54b6140cbea417161375e860fb10d603f09badb0c109ed0bc2f7bc20067a5846"} Dec 03 14:20:53.643857 master-0 kubenswrapper[4430]: 
I1203 14:20:53.643809 4430 generic.go:334] "Generic (PLEG): container finished" podID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" containerID="5d83d522f828e748d7d032f3ff1936675a733a53ba0aa7f39e28915f3f2072b9" exitCode=0 Dec 03 14:20:53.643922 master-0 kubenswrapper[4430]: I1203 14:20:53.643828 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerDied","Data":"5d83d522f828e748d7d032f3ff1936675a733a53ba0aa7f39e28915f3f2072b9"} Dec 03 14:20:53.644554 master-0 kubenswrapper[4430]: I1203 14:20:53.644523 4430 scope.go:117] "RemoveContainer" containerID="5d83d522f828e748d7d032f3ff1936675a733a53ba0aa7f39e28915f3f2072b9" Dec 03 14:20:53.646663 master-0 kubenswrapper[4430]: I1203 14:20:53.646636 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-5f78c89466-bshxw_82bd0ae5-b35d-47c8-b693-b27a9a56476d/manager/3.log" Dec 03 14:20:53.647832 master-0 kubenswrapper[4430]: I1203 14:20:53.647800 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-5f78c89466-bshxw_82bd0ae5-b35d-47c8-b693-b27a9a56476d/manager/2.log" Dec 03 14:20:53.647887 master-0 kubenswrapper[4430]: I1203 14:20:53.647838 4430 generic.go:334] "Generic (PLEG): container finished" podID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" containerID="b6bbf72a84092f50bd40fe6e46d6311742d89a9cef83ffde53267283ea90b6f9" exitCode=1 Dec 03 14:20:53.647887 master-0 kubenswrapper[4430]: I1203 14:20:53.647850 4430 generic.go:334] "Generic (PLEG): container finished" podID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" containerID="a57f3a556881f57b28aa81a1b7f6bbd44306b4874d2df36ed0f0bc67eb848026" exitCode=0 Dec 03 14:20:53.647959 master-0 kubenswrapper[4430]: I1203 14:20:53.647896 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerDied","Data":"b6bbf72a84092f50bd40fe6e46d6311742d89a9cef83ffde53267283ea90b6f9"} Dec 03 14:20:53.647959 master-0 kubenswrapper[4430]: I1203 14:20:53.647918 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerDied","Data":"a57f3a556881f57b28aa81a1b7f6bbd44306b4874d2df36ed0f0bc67eb848026"} Dec 03 14:20:53.648232 master-0 kubenswrapper[4430]: I1203 14:20:53.648191 4430 scope.go:117] "RemoveContainer" containerID="b6bbf72a84092f50bd40fe6e46d6311742d89a9cef83ffde53267283ea90b6f9" Dec 03 14:20:53.648232 master-0 kubenswrapper[4430]: I1203 14:20:53.648211 4430 scope.go:117] "RemoveContainer" containerID="a57f3a556881f57b28aa81a1b7f6bbd44306b4874d2df36ed0f0bc67eb848026" Dec 03 14:20:53.649737 master-0 kubenswrapper[4430]: I1203 14:20:53.649704 4430 generic.go:334] "Generic (PLEG): container finished" podID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" containerID="ee5fbbdb28f502e43e6a512cbbf957778f369fe882b43acd145d347e95dbf4df" exitCode=0 Dec 03 14:20:53.649892 master-0 kubenswrapper[4430]: I1203 14:20:53.649752 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerDied","Data":"ee5fbbdb28f502e43e6a512cbbf957778f369fe882b43acd145d347e95dbf4df"} Dec 03 14:20:53.650072 master-0 kubenswrapper[4430]: I1203 14:20:53.650054 4430 scope.go:117] "RemoveContainer" containerID="ee5fbbdb28f502e43e6a512cbbf957778f369fe882b43acd145d347e95dbf4df" Dec 03 14:20:53.652651 master-0 kubenswrapper[4430]: I1203 14:20:53.652628 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/5.log" Dec 03 14:20:53.653223 master-0 kubenswrapper[4430]: I1203 14:20:53.653193 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-86897dd478-qqwh7_63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4/snapshot-controller/4.log" Dec 03 14:20:53.653286 master-0 kubenswrapper[4430]: I1203 14:20:53.653271 4430 generic.go:334] "Generic (PLEG): container finished" podID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" containerID="e2fa399cca28278caf8b044e9c78793296cf9c5e629fca1475d458bec63e78db" exitCode=2 Dec 03 14:20:53.653418 master-0 kubenswrapper[4430]: I1203 14:20:53.653339 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerDied","Data":"e2fa399cca28278caf8b044e9c78793296cf9c5e629fca1475d458bec63e78db"} Dec 03 14:20:53.654425 master-0 kubenswrapper[4430]: I1203 14:20:53.654388 4430 scope.go:117] "RemoveContainer" containerID="e2fa399cca28278caf8b044e9c78793296cf9c5e629fca1475d458bec63e78db" Dec 03 14:20:53.654752 master-0 kubenswrapper[4430]: E1203 14:20:53.654721 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-86897dd478-qqwh7_openshift-cluster-storage-operator(63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:20:53.656980 master-0 kubenswrapper[4430]: I1203 14:20:53.656945 4430 generic.go:334] "Generic (PLEG): container finished" podID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" 
containerID="b0931e25643c6197e39f0aa7ba9cfa54a15bcc3be73426bc3ca04a17ebfb56fb" exitCode=0 Dec 03 14:20:53.657093 master-0 kubenswrapper[4430]: I1203 14:20:53.657061 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerDied","Data":"b0931e25643c6197e39f0aa7ba9cfa54a15bcc3be73426bc3ca04a17ebfb56fb"} Dec 03 14:20:53.657679 master-0 kubenswrapper[4430]: I1203 14:20:53.657655 4430 scope.go:117] "RemoveContainer" containerID="b0931e25643c6197e39f0aa7ba9cfa54a15bcc3be73426bc3ca04a17ebfb56fb" Dec 03 14:20:53.660365 master-0 kubenswrapper[4430]: I1203 14:20:53.660326 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-b62gf_b71ac8a5-987d-4eba-8bc0-a091f0a0de16/node-exporter/3.log" Dec 03 14:20:53.660922 master-0 kubenswrapper[4430]: I1203 14:20:53.660877 4430 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="83069a1a0084abec38cf08ffc3864c6dc387ece2da2dbaeed14a4f8878ec03d9" exitCode=0 Dec 03 14:20:53.660922 master-0 kubenswrapper[4430]: I1203 14:20:53.660898 4430 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="0a57a8bdd5b6859edb5ca8bb103c32c2e252a56328e53f02c6630b3ca1df16e3" exitCode=143 Dec 03 14:20:53.660922 master-0 kubenswrapper[4430]: I1203 14:20:53.660906 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"83069a1a0084abec38cf08ffc3864c6dc387ece2da2dbaeed14a4f8878ec03d9"} Dec 03 14:20:53.661741 master-0 kubenswrapper[4430]: I1203 14:20:53.660942 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" 
event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"0a57a8bdd5b6859edb5ca8bb103c32c2e252a56328e53f02c6630b3ca1df16e3"} Dec 03 14:20:53.661741 master-0 kubenswrapper[4430]: I1203 14:20:53.661485 4430 scope.go:117] "RemoveContainer" containerID="0a57a8bdd5b6859edb5ca8bb103c32c2e252a56328e53f02c6630b3ca1df16e3" Dec 03 14:20:53.661741 master-0 kubenswrapper[4430]: I1203 14:20:53.661511 4430 scope.go:117] "RemoveContainer" containerID="83069a1a0084abec38cf08ffc3864c6dc387ece2da2dbaeed14a4f8878ec03d9" Dec 03 14:20:53.662921 master-0 kubenswrapper[4430]: I1203 14:20:53.662893 4430 generic.go:334] "Generic (PLEG): container finished" podID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" containerID="40fb59cfd74e6041170c92f1ee6dc799a6c5d221e4d7a05d6614563d67bc4a19" exitCode=0 Dec 03 14:20:53.663007 master-0 kubenswrapper[4430]: I1203 14:20:53.662961 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerDied","Data":"40fb59cfd74e6041170c92f1ee6dc799a6c5d221e4d7a05d6614563d67bc4a19"} Dec 03 14:20:53.663911 master-0 kubenswrapper[4430]: I1203 14:20:53.663880 4430 scope.go:117] "RemoveContainer" containerID="40fb59cfd74e6041170c92f1ee6dc799a6c5d221e4d7a05d6614563d67bc4a19" Dec 03 14:20:53.665764 master-0 kubenswrapper[4430]: I1203 14:20:53.665732 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/cluster-policy-controller/1.log" Dec 03 14:20:53.667060 master-0 kubenswrapper[4430]: I1203 14:20:53.667032 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/0.log" Dec 03 14:20:53.667558 master-0 kubenswrapper[4430]: I1203 14:20:53.667521 4430 generic.go:334] "Generic (PLEG): 
container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc" exitCode=0 Dec 03 14:20:53.667558 master-0 kubenswrapper[4430]: I1203 14:20:53.667552 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3" exitCode=0 Dec 03 14:20:53.667558 master-0 kubenswrapper[4430]: I1203 14:20:53.667560 4430 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769" exitCode=2 Dec 03 14:20:53.667690 master-0 kubenswrapper[4430]: I1203 14:20:53.667595 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc"} Dec 03 14:20:53.667690 master-0 kubenswrapper[4430]: I1203 14:20:53.667631 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3"} Dec 03 14:20:53.667690 master-0 kubenswrapper[4430]: I1203 14:20:53.667641 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769"} Dec 03 14:20:53.669869 master-0 kubenswrapper[4430]: I1203 14:20:53.669832 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-66f4cc99d4-x278n_ab40dfa2-d8f8-4300-8a10-5aa73e1d6294/control-plane-machine-set-operator/1.log" Dec 03 14:20:53.669937 master-0 kubenswrapper[4430]: I1203 14:20:53.669890 4430 generic.go:334] "Generic (PLEG): container finished" podID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" containerID="302044d7c6d769d45230c8541ec0a0346f7987ab87dc02466826c5647a15464c" exitCode=0 Dec 03 14:20:53.670011 master-0 kubenswrapper[4430]: I1203 14:20:53.669951 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerDied","Data":"302044d7c6d769d45230c8541ec0a0346f7987ab87dc02466826c5647a15464c"} Dec 03 14:20:53.670577 master-0 kubenswrapper[4430]: I1203 14:20:53.670541 4430 scope.go:117] "RemoveContainer" containerID="302044d7c6d769d45230c8541ec0a0346f7987ab87dc02466826c5647a15464c" Dec 03 14:20:53.670823 master-0 kubenswrapper[4430]: E1203 14:20:53.670789 4430 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-66f4cc99d4-x278n_openshift-machine-api(ab40dfa2-d8f8-4300-8a10-5aa73e1d6294)\"" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:20:53.672429 master-0 kubenswrapper[4430]: I1203 14:20:53.672387 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovnkube-controller/1.log" Dec 03 14:20:53.675872 master-0 kubenswrapper[4430]: I1203 14:20:53.675852 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovn-acl-logging/1.log" Dec 03 14:20:53.677463 master-0 kubenswrapper[4430]: I1203 14:20:53.677362 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-txl6b_77430348-b53a-4898-8047-be8bb542a0a7/ovn-controller/2.log" Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679029 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="876e3a6d236e0be4c450dc348094b65ed7c200ebe5e36f5297e4821af364dfde" exitCode=2 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679049 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="f6fc0b8da5448f87611e04db90aca266d162bc2637b73ede6c1ca2a74107e8f9" exitCode=0 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679058 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="7df03249c6c36a8bedc8b2855a0ac7732b8b760292170d049984dc2323c4c36c" exitCode=0 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679065 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="b52e70a2fd68cf19fc245924194323a013951ec6f99b3a5e99b3a9580cd13ee0" exitCode=0 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679072 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="4f4048bb8a9818d0d6d08b3a0c4266128e22b30fad60e11c85437aeb1c539071" exitCode=0 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679079 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="91af6ef7f9da44b2fd5666c3d41bbac57f3b1b2e9b53696653ba5f67acb275c2" exitCode=143 Dec 03 14:20:53.685999 
master-0 kubenswrapper[4430]: I1203 14:20:53.679086 4430 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="9d0e540b9c2e29f516a9bcf74d0ff9cb2c9f714e5085f646fe71482827081d16" exitCode=143 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679118 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"876e3a6d236e0be4c450dc348094b65ed7c200ebe5e36f5297e4821af364dfde"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679171 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"f6fc0b8da5448f87611e04db90aca266d162bc2637b73ede6c1ca2a74107e8f9"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679187 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"7df03249c6c36a8bedc8b2855a0ac7732b8b760292170d049984dc2323c4c36c"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679200 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"b52e70a2fd68cf19fc245924194323a013951ec6f99b3a5e99b3a9580cd13ee0"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679210 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"4f4048bb8a9818d0d6d08b3a0c4266128e22b30fad60e11c85437aeb1c539071"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679220 4430 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"91af6ef7f9da44b2fd5666c3d41bbac57f3b1b2e9b53696653ba5f67acb275c2"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.679228 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"9d0e540b9c2e29f516a9bcf74d0ff9cb2c9f714e5085f646fe71482827081d16"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.682620 4430 generic.go:334] "Generic (PLEG): container finished" podID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" containerID="e2226b1e6fbbce79a23d04c546c8f3a797f1ed00bbf04ce53e482ad645f13380" exitCode=0 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.682681 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerDied","Data":"e2226b1e6fbbce79a23d04c546c8f3a797f1ed00bbf04ce53e482ad645f13380"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.683045 4430 scope.go:117] "RemoveContainer" containerID="e2226b1e6fbbce79a23d04c546c8f3a797f1ed00bbf04ce53e482ad645f13380" Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.684841 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-547cc9cc49-kqs4k_b02244d0-f4ef-4702-950d-9e3fb5ced128/monitoring-plugin/2.log" Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.684866 4430 generic.go:334] "Generic (PLEG): container finished" podID="b02244d0-f4ef-4702-950d-9e3fb5ced128" containerID="d079cea270d0cfe6e45724c631ac62ac89c7a513e0ce5e9badad54e53fa31429" exitCode=2 Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.684902 4430 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerDied","Data":"d079cea270d0cfe6e45724c631ac62ac89c7a513e0ce5e9badad54e53fa31429"} Dec 03 14:20:53.685999 master-0 kubenswrapper[4430]: I1203 14:20:53.685153 4430 scope.go:117] "RemoveContainer" containerID="d079cea270d0cfe6e45724c631ac62ac89c7a513e0ce5e9badad54e53fa31429" Dec 03 14:20:53.687445 master-0 kubenswrapper[4430]: I1203 14:20:53.687402 4430 generic.go:334] "Generic (PLEG): container finished" podID="b1b3ab29-77cf-48ac-8881-846c46bb9048" containerID="c9517578dc034f0c98dd71c22869d7f9997507ac06ea22d00ae1520c380d0e69" exitCode=0 Dec 03 14:20:53.687507 master-0 kubenswrapper[4430]: I1203 14:20:53.687474 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" event={"ID":"b1b3ab29-77cf-48ac-8881-846c46bb9048","Type":"ContainerDied","Data":"c9517578dc034f0c98dd71c22869d7f9997507ac06ea22d00ae1520c380d0e69"} Dec 03 14:20:53.688002 master-0 kubenswrapper[4430]: I1203 14:20:53.687972 4430 scope.go:117] "RemoveContainer" containerID="c9517578dc034f0c98dd71c22869d7f9997507ac06ea22d00ae1520c380d0e69" Dec 03 14:20:53.694283 master-0 kubenswrapper[4430]: I1203 14:20:53.692978 4430 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="343824e3f1b00694b3e683568986ea8999f4a2938d0db2b6054009935de2fe35" exitCode=0 Dec 03 14:20:53.694283 master-0 kubenswrapper[4430]: I1203 14:20:53.693062 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"343824e3f1b00694b3e683568986ea8999f4a2938d0db2b6054009935de2fe35"} Dec 03 14:20:53.694430 master-0 kubenswrapper[4430]: I1203 14:20:53.694288 4430 scope.go:117] "RemoveContainer" 
containerID="343824e3f1b00694b3e683568986ea8999f4a2938d0db2b6054009935de2fe35" Dec 03 14:20:53.695007 master-0 kubenswrapper[4430]: I1203 14:20:53.694937 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-57cbc648f8-q4cgg_74e39dce-29d5-4b2a-ab19-386b6cdae94d/openshift-state-metrics/1.log" Dec 03 14:20:53.695916 master-0 kubenswrapper[4430]: I1203 14:20:53.695870 4430 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="b47333ee9ba830a9af683a9b0c9324b71e1b375dad853a63c54e3c6a8cb148a6" exitCode=2 Dec 03 14:20:53.695916 master-0 kubenswrapper[4430]: I1203 14:20:53.695893 4430 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="72559983816bdf6dfe237948f445318c22392716c2ae4897c16037196621efe1" exitCode=0 Dec 03 14:20:53.695916 master-0 kubenswrapper[4430]: I1203 14:20:53.695904 4430 generic.go:334] "Generic (PLEG): container finished" podID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" containerID="52313725c2028dfbc2904f160605184b179c485f5bc888ed26c9c71dd42dc37c" exitCode=0 Dec 03 14:20:53.696028 master-0 kubenswrapper[4430]: I1203 14:20:53.695962 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"b47333ee9ba830a9af683a9b0c9324b71e1b375dad853a63c54e3c6a8cb148a6"} Dec 03 14:20:53.696028 master-0 kubenswrapper[4430]: I1203 14:20:53.695986 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"72559983816bdf6dfe237948f445318c22392716c2ae4897c16037196621efe1"} Dec 03 14:20:53.696028 master-0 kubenswrapper[4430]: I1203 14:20:53.696002 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerDied","Data":"52313725c2028dfbc2904f160605184b179c485f5bc888ed26c9c71dd42dc37c"} Dec 03 14:20:53.696642 master-0 kubenswrapper[4430]: I1203 14:20:53.696615 4430 scope.go:117] "RemoveContainer" containerID="52313725c2028dfbc2904f160605184b179c485f5bc888ed26c9c71dd42dc37c" Dec 03 14:20:53.697338 master-0 kubenswrapper[4430]: I1203 14:20:53.696644 4430 scope.go:117] "RemoveContainer" containerID="72559983816bdf6dfe237948f445318c22392716c2ae4897c16037196621efe1" Dec 03 14:20:53.697401 master-0 kubenswrapper[4430]: I1203 14:20:53.697343 4430 scope.go:117] "RemoveContainer" containerID="b47333ee9ba830a9af683a9b0c9324b71e1b375dad853a63c54e3c6a8cb148a6" Dec 03 14:20:53.698573 master-0 kubenswrapper[4430]: I1203 14:20:53.698545 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-59d99f9b7b-74sss_c95705e3-17ef-40fe-89e8-22586a32621b/insights-operator/3.log" Dec 03 14:20:53.698626 master-0 kubenswrapper[4430]: I1203 14:20:53.698584 4430 generic.go:334] "Generic (PLEG): container finished" podID="c95705e3-17ef-40fe-89e8-22586a32621b" containerID="c123b23ebe0056270214fb7a2c5e5f02c8177eacf32ad97b9e51ac2d145232fa" exitCode=2 Dec 03 14:20:53.698662 master-0 kubenswrapper[4430]: I1203 14:20:53.698633 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerDied","Data":"c123b23ebe0056270214fb7a2c5e5f02c8177eacf32ad97b9e51ac2d145232fa"} Dec 03 14:20:53.698987 master-0 kubenswrapper[4430]: I1203 14:20:53.698955 4430 scope.go:117] "RemoveContainer" containerID="c123b23ebe0056270214fb7a2c5e5f02c8177eacf32ad97b9e51ac2d145232fa" Dec 03 14:20:53.702687 master-0 kubenswrapper[4430]: I1203 14:20:53.702656 4430 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/5.log" Dec 03 14:20:53.703876 master-0 kubenswrapper[4430]: I1203 14:20:53.703839 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5fdc576499-j2n8j_690d1f81-7b1f-4fd0-9b6e-154c9687c744/cluster-baremetal-operator/4.log" Dec 03 14:20:53.704381 master-0 kubenswrapper[4430]: I1203 14:20:53.704336 4430 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="0c52434e813a20a1a4ecd0980a8da617dc528ac101149a794f0f4e66aaba1148" exitCode=2 Dec 03 14:20:53.704381 master-0 kubenswrapper[4430]: I1203 14:20:53.704370 4430 generic.go:334] "Generic (PLEG): container finished" podID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" containerID="85b3b387a530aa9d604f58b1466a82ed6119e4fd14759d50e69373a700eb6343" exitCode=0 Dec 03 14:20:53.704500 master-0 kubenswrapper[4430]: I1203 14:20:53.704426 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"0c52434e813a20a1a4ecd0980a8da617dc528ac101149a794f0f4e66aaba1148"} Dec 03 14:20:53.704500 master-0 kubenswrapper[4430]: I1203 14:20:53.704474 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerDied","Data":"85b3b387a530aa9d604f58b1466a82ed6119e4fd14759d50e69373a700eb6343"} Dec 03 14:20:53.705059 master-0 kubenswrapper[4430]: I1203 14:20:53.705040 4430 scope.go:117] "RemoveContainer" containerID="0c52434e813a20a1a4ecd0980a8da617dc528ac101149a794f0f4e66aaba1148" Dec 03 14:20:53.705146 master-0 kubenswrapper[4430]: I1203 14:20:53.705066 4430 scope.go:117] "RemoveContainer" 
containerID="85b3b387a530aa9d604f58b1466a82ed6119e4fd14759d50e69373a700eb6343" Dec 03 14:20:53.706282 master-0 kubenswrapper[4430]: I1203 14:20:53.706264 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-rev/0.log" Dec 03 14:20:53.707093 master-0 kubenswrapper[4430]: I1203 14:20:53.707077 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-metrics/0.log" Dec 03 14:20:53.709154 master-0 kubenswrapper[4430]: I1203 14:20:53.709095 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="dc5f4e8ba796d8365c4ac4c987abb1a3246149be979ebd7c40b447b9dcbbb050" exitCode=2 Dec 03 14:20:53.709154 master-0 kubenswrapper[4430]: I1203 14:20:53.709126 4430 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="b583c0349f59fc52b4ab436b3e5a0a2ea570336dd66ccb6a9e86fd60a78b1112" exitCode=0 Dec 03 14:20:53.709216 master-0 kubenswrapper[4430]: I1203 14:20:53.709152 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"dc5f4e8ba796d8365c4ac4c987abb1a3246149be979ebd7c40b447b9dcbbb050"} Dec 03 14:20:53.709216 master-0 kubenswrapper[4430]: I1203 14:20:53.709196 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"b583c0349f59fc52b4ab436b3e5a0a2ea570336dd66ccb6a9e86fd60a78b1112"} Dec 03 14:20:53.712336 master-0 kubenswrapper[4430]: I1203 14:20:53.712014 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-vkpv4_e3675c78-1902-4b92-8a93-cf2dc316f060/serve-healthcheck-canary/2.log" Dec 03 14:20:53.712336 master-0 kubenswrapper[4430]: I1203 14:20:53.712061 4430 
generic.go:334] "Generic (PLEG): container finished" podID="e3675c78-1902-4b92-8a93-cf2dc316f060" containerID="6e7b2d866bacc29c710bc8ce7afa7027ba2c2cc52fe1683cdedd96674d20b68e" exitCode=2 Dec 03 14:20:53.712336 master-0 kubenswrapper[4430]: I1203 14:20:53.712117 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerDied","Data":"6e7b2d866bacc29c710bc8ce7afa7027ba2c2cc52fe1683cdedd96674d20b68e"} Dec 03 14:20:53.712589 master-0 kubenswrapper[4430]: I1203 14:20:53.712557 4430 scope.go:117] "RemoveContainer" containerID="6e7b2d866bacc29c710bc8ce7afa7027ba2c2cc52fe1683cdedd96674d20b68e" Dec 03 14:20:53.715729 master-0 kubenswrapper[4430]: I1203 14:20:53.715690 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/2.log" Dec 03 14:20:53.716567 master-0 kubenswrapper[4430]: I1203 14:20:53.716541 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/1.log" Dec 03 14:20:53.717218 master-0 kubenswrapper[4430]: I1203 14:20:53.717190 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler/2.log" Dec 03 14:20:53.717843 master-0 kubenswrapper[4430]: I1203 14:20:53.717797 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4" exitCode=0 Dec 03 14:20:53.717843 master-0 kubenswrapper[4430]: I1203 14:20:53.717832 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" 
containerID="a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c" exitCode=2 Dec 03 14:20:53.717843 master-0 kubenswrapper[4430]: I1203 14:20:53.717839 4430 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3" exitCode=0 Dec 03 14:20:53.718021 master-0 kubenswrapper[4430]: I1203 14:20:53.717881 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4"} Dec 03 14:20:53.718021 master-0 kubenswrapper[4430]: I1203 14:20:53.717958 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c"} Dec 03 14:20:53.718021 master-0 kubenswrapper[4430]: I1203 14:20:53.717975 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3"} Dec 03 14:20:53.718656 master-0 kubenswrapper[4430]: I1203 14:20:53.718617 4430 scope.go:117] "RemoveContainer" containerID="fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4" Dec 03 14:20:53.718656 master-0 kubenswrapper[4430]: I1203 14:20:53.718654 4430 scope.go:117] "RemoveContainer" containerID="a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c" Dec 03 14:20:53.718761 master-0 kubenswrapper[4430]: I1203 14:20:53.718667 4430 scope.go:117] "RemoveContainer" containerID="a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3" Dec 03 14:20:53.720694 master-0 kubenswrapper[4430]: 
I1203 14:20:53.720663 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-77df56447c-vsrxx_a8dc6511-7339-4269-9d43-14ce53bb4e7f/console-operator/2.log" Dec 03 14:20:53.720741 master-0 kubenswrapper[4430]: I1203 14:20:53.720716 4430 generic.go:334] "Generic (PLEG): container finished" podID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" containerID="c809768d9f5c1be51c102ade94741e729d18869874debc05a750c4f2d9789d3d" exitCode=1 Dec 03 14:20:53.721062 master-0 kubenswrapper[4430]: I1203 14:20:53.720789 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerDied","Data":"c809768d9f5c1be51c102ade94741e729d18869874debc05a750c4f2d9789d3d"} Dec 03 14:20:53.722772 master-0 kubenswrapper[4430]: I1203 14:20:53.722748 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-pcchm_6d38d102-4efe-4ed3-ae23-b1e295cdaccd/network-check-target-container/2.log" Dec 03 14:20:53.722816 master-0 kubenswrapper[4430]: I1203 14:20:53.722784 4430 generic.go:334] "Generic (PLEG): container finished" podID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" containerID="9cb970a0ff4f56774de29ca6c880effdf313b3b85e4caf3d9d771902b809383e" exitCode=2 Dec 03 14:20:53.722866 master-0 kubenswrapper[4430]: I1203 14:20:53.722847 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerDied","Data":"9cb970a0ff4f56774de29ca6c880effdf313b3b85e4caf3d9d771902b809383e"} Dec 03 14:20:53.723024 master-0 kubenswrapper[4430]: I1203 14:20:53.722989 4430 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:20:53.723210 master-0 kubenswrapper[4430]: I1203 14:20:53.723185 4430 scope.go:117] "RemoveContainer" 
containerID="9cb970a0ff4f56774de29ca6c880effdf313b3b85e4caf3d9d771902b809383e" Dec 03 14:20:53.723989 master-0 kubenswrapper[4430]: I1203 14:20:53.723957 4430 scope.go:117] "RemoveContainer" containerID="c809768d9f5c1be51c102ade94741e729d18869874debc05a750c4f2d9789d3d" Dec 03 14:20:53.725863 master-0 kubenswrapper[4430]: I1203 14:20:53.725826 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c9c84854-xf7nv_8b442f72-b113-4227-93b5-ea1ae90d5154/console/0.log" Dec 03 14:20:53.725930 master-0 kubenswrapper[4430]: I1203 14:20:53.725878 4430 generic.go:334] "Generic (PLEG): container finished" podID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerID="7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae" exitCode=2 Dec 03 14:20:53.726007 master-0 kubenswrapper[4430]: I1203 14:20:53.725977 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerDied","Data":"7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae"} Dec 03 14:20:53.726394 master-0 kubenswrapper[4430]: I1203 14:20:53.726368 4430 scope.go:117] "RemoveContainer" containerID="7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae" Dec 03 14:20:53.727788 master-0 kubenswrapper[4430]: I1203 14:20:53.727748 4430 generic.go:334] "Generic (PLEG): container finished" podID="faa79e15-1875-4865-b5e0-aecd4c447bad" containerID="44353bbb262f85a0b0d8143ae789fbdfa6f75ad1779eb1521fb956532d8d3c62" exitCode=0 Dec 03 14:20:53.727843 master-0 kubenswrapper[4430]: I1203 14:20:53.727818 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerDied","Data":"44353bbb262f85a0b0d8143ae789fbdfa6f75ad1779eb1521fb956532d8d3c62"} Dec 03 14:20:53.728189 master-0 kubenswrapper[4430]: I1203 14:20:53.728163 4430 
scope.go:117] "RemoveContainer" containerID="44353bbb262f85a0b0d8143ae789fbdfa6f75ad1779eb1521fb956532d8d3c62" Dec 03 14:20:53.729293 master-0 kubenswrapper[4430]: I1203 14:20:53.729267 4430 generic.go:334] "Generic (PLEG): container finished" podID="b3eef3ef-f954-4e47-92b4-0155bc27332d" containerID="cd2b922ff000e023561fd4d7288f18b8e8f2cf57cb7e245f6f79c898b877df6c" exitCode=0 Dec 03 14:20:53.729369 master-0 kubenswrapper[4430]: I1203 14:20:53.729333 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerDied","Data":"cd2b922ff000e023561fd4d7288f18b8e8f2cf57cb7e245f6f79c898b877df6c"} Dec 03 14:20:53.730052 master-0 kubenswrapper[4430]: I1203 14:20:53.730025 4430 scope.go:117] "RemoveContainer" containerID="cd2b922ff000e023561fd4d7288f18b8e8f2cf57cb7e245f6f79c898b877df6c" Dec 03 14:20:53.731278 master-0 kubenswrapper[4430]: I1203 14:20:53.731244 4430 generic.go:334] "Generic (PLEG): container finished" podID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" containerID="8a576ad2831821ea4b6d5602aa70010b115a4dc548df83ef9a7f154f24e78877" exitCode=0 Dec 03 14:20:53.731337 master-0 kubenswrapper[4430]: I1203 14:20:53.731313 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerDied","Data":"8a576ad2831821ea4b6d5602aa70010b115a4dc548df83ef9a7f154f24e78877"} Dec 03 14:20:53.731873 master-0 kubenswrapper[4430]: I1203 14:20:53.731821 4430 scope.go:117] "RemoveContainer" containerID="8a576ad2831821ea4b6d5602aa70010b115a4dc548df83ef9a7f154f24e78877" Dec 03 14:20:53.733424 master-0 kubenswrapper[4430]: I1203 14:20:53.733393 4430 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="670178a9112ded1df5b4df71d85f8bbdc6dc3eaee7dcf3f04f4418c84722164a" exitCode=0 Dec 03 14:20:53.733521 master-0 
kubenswrapper[4430]: I1203 14:20:53.733451 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"670178a9112ded1df5b4df71d85f8bbdc6dc3eaee7dcf3f04f4418c84722164a"} Dec 03 14:20:53.733843 master-0 kubenswrapper[4430]: I1203 14:20:53.733816 4430 scope.go:117] "RemoveContainer" containerID="670178a9112ded1df5b4df71d85f8bbdc6dc3eaee7dcf3f04f4418c84722164a" Dec 03 14:20:53.740762 master-0 kubenswrapper[4430]: I1203 14:20:53.740726 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="665ad677aaa74ebbffda03136c226e5418f7caeb08ac0c3d1e8ee4e37c185fd5" exitCode=0 Dec 03 14:20:53.740762 master-0 kubenswrapper[4430]: I1203 14:20:53.740753 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="51da4cadfe494abf9d71139ebc5ee02a538f34346eee9d660a0740d99613a7ee" exitCode=0 Dec 03 14:20:53.740762 master-0 kubenswrapper[4430]: I1203 14:20:53.740761 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="acab1430bd92b580220f13247b2a261291dbaed1747728645f8c995585d74972" exitCode=0 Dec 03 14:20:53.740904 master-0 kubenswrapper[4430]: I1203 14:20:53.740769 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="cd94bbbc8c57e2e3fade7ecec5038282abb27807ec06c1f19cbee3b4e3fc8fb3" exitCode=0 Dec 03 14:20:53.740904 master-0 kubenswrapper[4430]: I1203 14:20:53.740777 4430 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="b64f5f125b688edf175adfcffc5124baddf49d5600e593c50b76523fed1745a9" exitCode=0 Dec 03 14:20:53.740904 master-0 kubenswrapper[4430]: I1203 14:20:53.740784 4430 generic.go:334] "Generic (PLEG): container finished" 
podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="b6466aa546730021e5613ad7d04a35e0e9f305a56aebdaaa8f3e64231dd4ef70" exitCode=0
Dec 03 14:20:53.740904 master-0 kubenswrapper[4430]: I1203 14:20:53.740827 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"665ad677aaa74ebbffda03136c226e5418f7caeb08ac0c3d1e8ee4e37c185fd5"}
Dec 03 14:20:53.741034 master-0 kubenswrapper[4430]: I1203 14:20:53.740906 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"51da4cadfe494abf9d71139ebc5ee02a538f34346eee9d660a0740d99613a7ee"}
Dec 03 14:20:53.741034 master-0 kubenswrapper[4430]: I1203 14:20:53.740936 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"acab1430bd92b580220f13247b2a261291dbaed1747728645f8c995585d74972"}
Dec 03 14:20:53.741034 master-0 kubenswrapper[4430]: I1203 14:20:53.740950 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"cd94bbbc8c57e2e3fade7ecec5038282abb27807ec06c1f19cbee3b4e3fc8fb3"}
Dec 03 14:20:53.741034 master-0 kubenswrapper[4430]: I1203 14:20:53.740965 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"b64f5f125b688edf175adfcffc5124baddf49d5600e593c50b76523fed1745a9"}
Dec 03 14:20:53.741034 master-0 kubenswrapper[4430]: I1203 14:20:53.740990 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"b6466aa546730021e5613ad7d04a35e0e9f305a56aebdaaa8f3e64231dd4ef70"}
Dec 03 14:20:53.741498 master-0 kubenswrapper[4430]: I1203 14:20:53.741470 4430 scope.go:117] "RemoveContainer" containerID="b6466aa546730021e5613ad7d04a35e0e9f305a56aebdaaa8f3e64231dd4ef70"
Dec 03 14:20:53.741536 master-0 kubenswrapper[4430]: I1203 14:20:53.741499 4430 scope.go:117] "RemoveContainer" containerID="b64f5f125b688edf175adfcffc5124baddf49d5600e593c50b76523fed1745a9"
Dec 03 14:20:53.741536 master-0 kubenswrapper[4430]: I1203 14:20:53.741513 4430 scope.go:117] "RemoveContainer" containerID="cd94bbbc8c57e2e3fade7ecec5038282abb27807ec06c1f19cbee3b4e3fc8fb3"
Dec 03 14:20:53.741536 master-0 kubenswrapper[4430]: I1203 14:20:53.741523 4430 scope.go:117] "RemoveContainer" containerID="acab1430bd92b580220f13247b2a261291dbaed1747728645f8c995585d74972"
Dec 03 14:20:53.741536 master-0 kubenswrapper[4430]: I1203 14:20:53.741533 4430 scope.go:117] "RemoveContainer" containerID="51da4cadfe494abf9d71139ebc5ee02a538f34346eee9d660a0740d99613a7ee"
Dec 03 14:20:53.741661 master-0 kubenswrapper[4430]: I1203 14:20:53.741543 4430 scope.go:117] "RemoveContainer" containerID="665ad677aaa74ebbffda03136c226e5418f7caeb08ac0c3d1e8ee4e37c185fd5"
Dec 03 14:20:53.744562 master-0 kubenswrapper[4430]: I1203 14:20:53.744531 4430 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e" exitCode=0
Dec 03 14:20:53.744642 master-0 kubenswrapper[4430]: I1203 14:20:53.744600 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e"}
Dec 03 14:20:53.744972 master-0 kubenswrapper[4430]: I1203 14:20:53.744939 4430 scope.go:117] "RemoveContainer" containerID="8ad4920c5010b0491b1d1602cda8a07ad7c0120c8071e1e5fd94755f1245be1e"
Dec 03 14:20:53.750016 master-0 kubenswrapper[4430]: I1203 14:20:53.748023 4430 generic.go:334] "Generic (PLEG): container finished" podID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerID="6f164a6df1bde5320ab22b53ded4e042f338dc219f6766493bf70ff678182ddd" exitCode=0
Dec 03 14:20:53.750016 master-0 kubenswrapper[4430]: I1203 14:20:53.748094 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerDied","Data":"6f164a6df1bde5320ab22b53ded4e042f338dc219f6766493bf70ff678182ddd"}
Dec 03 14:20:53.750016 master-0 kubenswrapper[4430]: I1203 14:20:53.748468 4430 scope.go:117] "RemoveContainer" containerID="6f164a6df1bde5320ab22b53ded4e042f338dc219f6766493bf70ff678182ddd"
Dec 03 14:20:53.753551 master-0 kubenswrapper[4430]: I1203 14:20:53.753443 4430 generic.go:334] "Generic (PLEG): container finished" podID="4df2889c-99f7-402a-9d50-18ccf427179c" containerID="17a5cd44c392850f434f61e5e79c39571ac458da606d60e15fa99372bc690af8" exitCode=0
Dec 03 14:20:53.753551 master-0 kubenswrapper[4430]: I1203 14:20:53.753486 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerDied","Data":"17a5cd44c392850f434f61e5e79c39571ac458da606d60e15fa99372bc690af8"}
Dec 03 14:20:53.754340 master-0 kubenswrapper[4430]: I1203 14:20:53.754313 4430 scope.go:117] "RemoveContainer" containerID="17a5cd44c392850f434f61e5e79c39571ac458da606d60e15fa99372bc690af8"
Dec 03 14:20:53.756169 master-0 kubenswrapper[4430]: I1203 14:20:53.755691 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/2.log"
Dec 03 14:20:53.756722 master-0 kubenswrapper[4430]: I1203 14:20:53.756297 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-754cfd84-qf898_69b752ed-691c-4574-a01e-428d4bf85b75/manager/1.log"
Dec 03 14:20:53.756952 master-0 kubenswrapper[4430]: I1203 14:20:53.756878 4430 generic.go:334] "Generic (PLEG): container finished" podID="69b752ed-691c-4574-a01e-428d4bf85b75" containerID="c127d86f25382d0f34fa623953455aa5a7250f26a5d55cdc092b594a25f7b6f6" exitCode=0
Dec 03 14:20:53.757002 master-0 kubenswrapper[4430]: I1203 14:20:53.756962 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerDied","Data":"c127d86f25382d0f34fa623953455aa5a7250f26a5d55cdc092b594a25f7b6f6"}
Dec 03 14:20:53.757591 master-0 kubenswrapper[4430]: I1203 14:20:53.757529 4430 scope.go:117] "RemoveContainer" containerID="c127d86f25382d0f34fa623953455aa5a7250f26a5d55cdc092b594a25f7b6f6"
Dec 03 14:20:53.757591 master-0 kubenswrapper[4430]: I1203 14:20:53.757589 4430 scope.go:117] "RemoveContainer" containerID="dc551c9a615037c7c7b838a40e2ff9662582fd1a85bcf96c80ad2921e9fffc09"
Dec 03 14:20:53.759085 master-0 kubenswrapper[4430]: I1203 14:20:53.759040 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-n24qb_6ef37bba-85d9-4303-80c0-aac3dc49d3d9/iptables-alerter/3.log"
Dec 03 14:20:53.759165 master-0 kubenswrapper[4430]: I1203 14:20:53.759098 4430 generic.go:334] "Generic (PLEG): container finished" podID="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" containerID="887ada6287d98232addb4779c33abf88bd14342273f22a3807dcefc91a0fd10d" exitCode=143
Dec 03 14:20:53.759219 master-0 kubenswrapper[4430]: I1203 14:20:53.759189 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerDied","Data":"887ada6287d98232addb4779c33abf88bd14342273f22a3807dcefc91a0fd10d"}
Dec 03 14:20:53.759737 master-0 kubenswrapper[4430]: I1203 14:20:53.759715 4430 scope.go:117] "RemoveContainer" containerID="887ada6287d98232addb4779c33abf88bd14342273f22a3807dcefc91a0fd10d"
Dec 03 14:20:53.762937 master-0 kubenswrapper[4430]: I1203 14:20:53.762896 4430 generic.go:334] "Generic (PLEG): container finished" podID="6935a3f8-723e-46e6-8498-483f34bf0825" containerID="153f16943f56e9f517fe3bb5b6cd273abc62130b5202c99cf6cd2be09af776cd" exitCode=0
Dec 03 14:20:53.763017 master-0 kubenswrapper[4430]: I1203 14:20:53.762946 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerDied","Data":"153f16943f56e9f517fe3bb5b6cd273abc62130b5202c99cf6cd2be09af776cd"}
Dec 03 14:20:53.763520 master-0 kubenswrapper[4430]: I1203 14:20:53.763303 4430 scope.go:117] "RemoveContainer" containerID="153f16943f56e9f517fe3bb5b6cd273abc62130b5202c99cf6cd2be09af776cd"
Dec 03 14:20:53.767412 master-0 kubenswrapper[4430]: I1203 14:20:53.767384 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/3.log"
Dec 03 14:20:53.767972 master-0 kubenswrapper[4430]: I1203 14:20:53.767940 4430 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="f13652d4628e63bd07a9fa5dbb77c63632f6698856c14038338f8bbd246f113a" exitCode=0
Dec 03 14:20:53.767972 master-0 kubenswrapper[4430]: I1203 14:20:53.767967 4430 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="d2ca2ee49f1caf825f3be17bc4c4d0dd12b887ed189295e71da9c01631da67fc" exitCode=0
Dec 03 14:20:53.768089 master-0 kubenswrapper[4430]: I1203 14:20:53.768053 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"f13652d4628e63bd07a9fa5dbb77c63632f6698856c14038338f8bbd246f113a"}
Dec 03 14:20:53.768202 master-0 kubenswrapper[4430]: I1203 14:20:53.768110 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"d2ca2ee49f1caf825f3be17bc4c4d0dd12b887ed189295e71da9c01631da67fc"}
Dec 03 14:20:53.768900 master-0 kubenswrapper[4430]: I1203 14:20:53.768875 4430 scope.go:117] "RemoveContainer" containerID="d2ca2ee49f1caf825f3be17bc4c4d0dd12b887ed189295e71da9c01631da67fc"
Dec 03 14:20:53.768900 master-0 kubenswrapper[4430]: I1203 14:20:53.768901 4430 scope.go:117] "RemoveContainer" containerID="f13652d4628e63bd07a9fa5dbb77c63632f6698856c14038338f8bbd246f113a"
Dec 03 14:20:53.772770 master-0 kubenswrapper[4430]: I1203 14:20:53.771940 4430 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-42hmk_19c2a40b-213c-42f1-9459-87c2e780a75f/kube-multus-additional-cni-plugins/2.log"
Dec 03 14:20:53.775383 master-0 kubenswrapper[4430]: I1203 14:20:53.775343 4430 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="6c58206c8f470d87dc9d5a570f4eaec4253acfe9d6743f8cfb025a93e6bf3be3" exitCode=143
Dec 03 14:20:53.775652 master-0 kubenswrapper[4430]: I1203 14:20:53.775512 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"6c58206c8f470d87dc9d5a570f4eaec4253acfe9d6743f8cfb025a93e6bf3be3"}
Dec 03 14:20:53.776207 master-0 kubenswrapper[4430]: I1203 14:20:53.776129 4430 scope.go:117] "RemoveContainer" containerID="6c58206c8f470d87dc9d5a570f4eaec4253acfe9d6743f8cfb025a93e6bf3be3"
Dec 03 14:20:53.777841 master-0 kubenswrapper[4430]: I1203 14:20:53.777718 4430 generic.go:334] "Generic (PLEG): container finished" podID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" containerID="46872b6388f4e7de23159cbce28c7d7790585a0246ce8446e98add9bf9847a50" exitCode=0
Dec 03 14:20:53.777841 master-0 kubenswrapper[4430]: I1203 14:20:53.777793 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerDied","Data":"46872b6388f4e7de23159cbce28c7d7790585a0246ce8446e98add9bf9847a50"}
Dec 03 14:20:53.778503 master-0 kubenswrapper[4430]: I1203 14:20:53.778481 4430 scope.go:117] "RemoveContainer" containerID="46872b6388f4e7de23159cbce28c7d7790585a0246ce8446e98add9bf9847a50"
Dec 03 14:20:53.781255 master-0 kubenswrapper[4430]: I1203 14:20:53.781229 4430 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503" exitCode=0
Dec 03 14:20:53.781305 master-0 kubenswrapper[4430]: I1203 14:20:53.781256 4430 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503"}
Dec 03 14:20:53.781709 master-0 kubenswrapper[4430]: I1203 14:20:53.781682 4430 scope.go:117] "RemoveContainer" containerID="781b36a43b4c602a4dacd67966ecdaaf8ab1a93bf3a69c83b34d00721a1fc503"
Dec 03 14:20:54.910039 master-0 kubenswrapper[4430]: I1203 14:20:54.909952 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:20:54.910039 master-0 kubenswrapper[4430]: I1203 14:20:54.910035 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:20:55.024659 master-0 kubenswrapper[4430]: I1203 14:20:55.024586 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:20:55.024659 master-0 kubenswrapper[4430]: I1203 14:20:55.024655 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:20:55.103057 master-0 kubenswrapper[4430]: I1203 14:20:55.102935 4430 patch_prober.go:28] interesting pod/package-server-manager-75b4d49d4c-h599p container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" start-of-body=
Dec 03 14:20:55.103057 master-0 kubenswrapper[4430]: I1203 14:20:55.103019 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused"
Dec 03 14:20:55.103767 master-0 kubenswrapper[4430]: I1203 14:20:55.103218 4430 patch_prober.go:28] interesting pod/package-server-manager-75b4d49d4c-h599p container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused" start-of-body=
Dec 03 14:20:55.103767 master-0 kubenswrapper[4430]: I1203 14:20:55.103300 4430 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.128.0.18:8080/healthz\": dial tcp 10.128.0.18:8080: connect: connection refused"
Dec 03 14:20:55.117839 master-0 kubenswrapper[4430]: I1203 14:20:55.117733 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:20:55.117839 master-0 kubenswrapper[4430]: I1203 14:20:55.117838 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:20:55.118863 master-0 kubenswrapper[4430]: I1203 14:20:55.118817 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:20:55.118961 master-0 kubenswrapper[4430]: I1203 14:20:55.118882 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:20:55.118961 master-0 kubenswrapper[4430]: I1203 14:20:55.118898 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:20:55.118961 master-0 kubenswrapper[4430]: I1203 14:20:55.118942 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:20:55.119161 master-0 kubenswrapper[4430]: I1203 14:20:55.119006 4430 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:20:55.119161 master-0 kubenswrapper[4430]: I1203 14:20:55.119044 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: I1203 14:20:55.165819 4430 patch_prober.go:28] interesting pod/apiserver-57fd58bc7b-kktql container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]log ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]etcd excluded: ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]etcd-readiness excluded: ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]informer-sync ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/max-in-flight-filter ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-StartUserInformer ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-StartOAuthInformer ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: [-]shutdown failed: reason withheld
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: readyz check failed
Dec 03 14:20:55.165922 master-0 kubenswrapper[4430]: I1203 14:20:55.165919 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:20:55.519583 master-0 kubenswrapper[4430]: I1203 14:20:55.519480 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:20:55.608403 master-0 kubenswrapper[4430]: I1203 14:20:55.608208 4430 patch_prober.go:28] interesting pod/dns-default-5m4f8 container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused" start-of-body=
Dec 03 14:20:55.608403 master-0 kubenswrapper[4430]: I1203 14:20:55.608299 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.39:8181/ready\": dial tcp 10.128.0.39:8181: connect: connection refused"
Dec 03 14:20:55.612399 master-0 kubenswrapper[4430]: I1203 14:20:55.612339 4430 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:20:55.612399 master-0 kubenswrapper[4430]: I1203 14:20:55.612392 4430 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: I1203 14:20:55.622802 4430 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]log ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]api-openshift-apiserver-available ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]api-openshift-oauth-apiserver-available ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]informer-sync ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/priority-and-fairness-filter ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-apiextensions-informers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-apiextensions-controllers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/crd-informer-synced ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-system-namespaces-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/rbac/bootstrap-roles ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/bootstrap-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-registration-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-discovery-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]autoregister-completion ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-openapi-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: [-]shutdown failed: reason withheld
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: readyz check failed
Dec 03 14:20:55.622914 master-0 kubenswrapper[4430]: I1203 14:20:55.622922 4430 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:20:55.849322 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Dec 03 14:20:55.915971 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Dec 03 14:20:55.916259 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Dec 03 14:20:55.916762 master-0 systemd[1]: kubelet.service: Consumed 2min 54.775s CPU time.
-- Boot 5a54df7864a74b65a168d6e871bf4ce7 --
Dec 03 14:25:39.446163 master-0 systemd[1]: Starting Kubernetes Kubelet...
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 14:25:39.632175 master-0 kubenswrapper[3173]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: I1203 14:25:39.632463 3173 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635096 3173 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635115 3173 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635119 3173 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635124 3173 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635128 3173 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635132 3173 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635137 3173 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635140 3173 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635144 3173 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635148 3173 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635151 3173 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635155 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635158 3173 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635162 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635166 3173 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635169 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635172 3173 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:25:39.637712 master-0 kubenswrapper[3173]: W1203 14:25:39.635176 3173 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635180 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635184 3173 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635188 3173 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635191 3173 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635196 3173 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635200 3173 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635204 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635207 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635211 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635215 3173 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635219 3173 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635224 3173 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635228 3173 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635233 3173 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635237 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635240 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635244 3173 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635249 3173 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:25:39.638588 master-0 kubenswrapper[3173]: W1203 14:25:39.635252 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635256 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635260 3173 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635263 3173 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635267 3173 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635271 3173 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635275 3173 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635278 3173 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635282 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635286 3173 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635290 3173 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635293 3173 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635297 3173 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635300 3173 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635303 3173 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635307 3173 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635311 3173 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635314 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635318 3173 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635321 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:25:39.639100 master-0 kubenswrapper[3173]: W1203 14:25:39.635325 3173 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635328 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635332 3173 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635335 3173 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635338 3173 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635342 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635346 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635349 3173 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635353 3173 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635356 3173 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635359 3173 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635363 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635368 3173 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635373 3173 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635378 3173 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: W1203 14:25:39.635382 3173 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: I1203 14:25:39.635839 3173 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: I1203 14:25:39.635849 3173 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: I1203 14:25:39.635855 3173 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: I1203 14:25:39.635860 3173 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 14:25:39.639618 master-0 kubenswrapper[3173]: I1203 14:25:39.635869 3173 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635873 3173 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635885 3173 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635890 3173 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635895 3173 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635899 3173 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635903 3173 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635908 3173 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635912 3173 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635916 3173 flags.go:64] FLAG: --cgroup-root=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635920 3173 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635924 3173 flags.go:64] FLAG: --client-ca-file=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635928 3173 flags.go:64] FLAG: --cloud-config=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635932 3173 flags.go:64] FLAG: --cloud-provider=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635936 3173 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635941 3173 flags.go:64] FLAG: --cluster-domain=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635945 3173 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635949 3173 flags.go:64] FLAG: --config-dir=""
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635953 3173 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635957 3173 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635962 3173 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635967 3173 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635971 3173 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635975 3173 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 14:25:39.640664 master-0 kubenswrapper[3173]: I1203 14:25:39.635979 3173 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.635983 3173 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.635987 3173 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.635991 3173 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.635995 3173 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636015 3173 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636019 3173 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636023 3173 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636029 3173 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636034 3173 flags.go:64] FLAG: --enable-server="true"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636037 3173 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636042 3173 flags.go:64] FLAG: --event-burst="100"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636047 3173 flags.go:64] FLAG: --event-qps="50"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636051 3173 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636055 3173 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636059 3173 flags.go:64] FLAG: --eviction-hard=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636064 3173 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636068 3173 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636072 3173 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636076 3173 flags.go:64] FLAG: --eviction-soft=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636080 3173 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636084 3173 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636087 3173 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636091 3173 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636095 3173 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 14:25:39.641295 master-0 kubenswrapper[3173]: I1203 14:25:39.636099 3173 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636103 3173 flags.go:64] FLAG: --feature-gates=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636108 3173 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636114 3173 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636118 3173 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636122 3173 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636126 3173 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636130 3173 flags.go:64] FLAG: --help="false"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636134 3173 flags.go:64] FLAG: --hostname-override=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636138 3173 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636142 3173 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636146 3173 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636150 3173 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636154 3173 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636158 3173 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636163 3173 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636168 3173 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636172 3173 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636176 3173 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636180 3173 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636184 3173 flags.go:64] FLAG: --kube-reserved=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636188 3173 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636192 3173 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636196 3173 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636199 3173 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636203 3173 flags.go:64] FLAG: --lock-file=""
Dec 03 14:25:39.642094 master-0 kubenswrapper[3173]: I1203 14:25:39.636207 3173 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636211 3173 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636215 3173 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636221 3173 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636225 3173 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636229 3173 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636232 3173 flags.go:64] FLAG: --logging-format="text"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636236 3173 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636240 3173 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636245 3173 flags.go:64] FLAG: --manifest-url=""
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636249 3173 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636254 3173 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636258 3173 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636263 3173 flags.go:64] FLAG: --max-pods="110"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636267 3173 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636271 3173 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636275 3173 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636279 3173 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636283 3173 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636287 3173 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636291 3173 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636301 3173 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636305 3173 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636309 3173 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 14:25:39.642959 master-0 kubenswrapper[3173]: I1203 14:25:39.636313 3173 flags.go:64] FLAG: --pod-cidr=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636317 3173 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636324 3173 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636328 3173 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636332 3173 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636336 3173 flags.go:64] FLAG: --port="10250"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636340 3173 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636344 3173 flags.go:64] FLAG: --provider-id=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636348 3173 flags.go:64] FLAG: --qos-reserved=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636352 3173 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636356 3173 flags.go:64] FLAG: --register-node="true"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636360 3173 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636364 3173 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636371 3173 flags.go:64] FLAG: --registry-burst="10"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636375 3173 flags.go:64] FLAG: --registry-qps="5"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636379 3173 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636382 3173 flags.go:64] FLAG: --reserved-memory=""
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636388 3173 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636392 3173 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636397 3173 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636400 3173 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636404 3173 flags.go:64] FLAG: --runonce="false"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636408 3173 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636412 3173 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636416 3173 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 14:25:39.644771 master-0 kubenswrapper[3173]: I1203 14:25:39.636420 3173 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636424 3173 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636429 3173 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636432 3173 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636438 3173 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636442 3173 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636446 3173 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636450 3173 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636454 3173 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636458 3173 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636462 3173 flags.go:64] FLAG: --system-cgroups=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636466 3173 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636472 3173 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636476 3173 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636480 3173 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636485 3173 flags.go:64] FLAG: --tls-min-version=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636489 3173 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636493 3173 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636497 3173 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636501 3173 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636505 3173 flags.go:64] FLAG: --v="2"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636510 3173 flags.go:64] FLAG: --version="false"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636515 3173 flags.go:64] FLAG: --vmodule=""
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636520 3173 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: I1203 14:25:39.636524 3173 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 14:25:39.645794 master-0 kubenswrapper[3173]: W1203 14:25:39.636621 3173 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636626 3173 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636630 3173 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636634 3173 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636638 3173 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636642 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636646 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636650 3173 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636654 3173 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636658 3173 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636661 3173 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636667 3173 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636672 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636676 3173 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636680 3173 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636684 3173 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636688 3173 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636691 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636695 3173 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636699 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:25:39.646447 master-0 kubenswrapper[3173]: W1203 14:25:39.636702 3173 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636706 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636709 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636712 3173 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636716 3173 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636720 3173 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636724 3173 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636727 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636731 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636734 3173 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636738 3173 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636741 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636747 3173 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636750 3173 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636753 3173 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636757 3173 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636760 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636764 3173 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636767 3173 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:25:39.647372 master-0 kubenswrapper[3173]: W1203 14:25:39.636772 3173 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636776 3173 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636780 3173 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636783 3173 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636788 3173 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636792 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636796 3173 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636799 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636802 3173 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636806 3173 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636809 3173 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636813 3173 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636817 3173 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636821 3173 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636825 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636829 3173 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636833 3173 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636836 3173 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636840 3173 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636843 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:25:39.647866 master-0 kubenswrapper[3173]: W1203 14:25:39.636846 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636851 3173 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636854 3173 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636858 3173 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636861 3173 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636866 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636870 3173 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636874 3173 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636877 3173 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636882 3173 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636886 3173 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636889 3173 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: W1203 14:25:39.636894 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: I1203 14:25:39.637041 3173 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: I1203 14:25:39.646417 3173 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Dec 03 14:25:39.648432 master-0 kubenswrapper[3173]: I1203 14:25:39.646465 3173 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 14:25:39.649453 master-0 kubenswrapper[3173]: W1203 14:25:39.649410 3173 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:25:39.649453 master-0 kubenswrapper[3173]: W1203 14:25:39.649439 3173 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:25:39.649453 master-0 kubenswrapper[3173]: W1203 14:25:39.649446 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:25:39.649453 master-0 kubenswrapper[3173]: W1203 14:25:39.649451 3173 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:25:39.649453 
master-0 kubenswrapper[3173]: W1203 14:25:39.649457 3173 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649464 3173 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649470 3173 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649493 3173 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649501 3173 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649507 3173 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649514 3173 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649523 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649529 3173 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649534 3173 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649541 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649547 3173 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649553 3173 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649560 3173 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649566 3173 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649572 3173 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649579 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649585 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649591 3173 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:25:39.649646 master-0 kubenswrapper[3173]: W1203 14:25:39.649596 3173 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649601 3173 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649607 3173 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649613 3173 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649618 3173 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649623 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649628 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649634 3173 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: 
W1203 14:25:39.649639 3173 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649645 3173 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649651 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649656 3173 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649662 3173 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649668 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649674 3173 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649679 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649684 3173 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649689 3173 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649695 3173 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649700 3173 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:25:39.650156 master-0 kubenswrapper[3173]: W1203 14:25:39.649705 3173 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649710 3173 feature_gate.go:330] 
unrecognized feature gate: OnClusterBuild Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649715 3173 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649721 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649728 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649735 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649742 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649749 3173 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649756 3173 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649762 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649769 3173 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649776 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649782 3173 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649788 3173 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649796 3173 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649802 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649808 3173 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649816 3173 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649822 3173 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:25:39.650744 master-0 kubenswrapper[3173]: W1203 14:25:39.649828 3173 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649834 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649840 3173 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649845 3173 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649851 3173 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649856 3173 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649861 3173 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649867 3173 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649872 3173 feature_gate.go:330] unrecognized feature gate: 
RouteAdvertisements Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.649877 3173 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: I1203 14:25:39.649887 3173 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.650057 3173 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.650068 3173 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.650075 3173 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.650081 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:25:39.651291 master-0 kubenswrapper[3173]: W1203 14:25:39.650087 3173 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650092 3173 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650098 3173 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650103 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650108 3173 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650113 3173 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650118 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650123 3173 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650128 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650134 3173 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650140 3173 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650145 3173 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650150 3173 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650157 3173 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650163 3173 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650169 3173 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650175 3173 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650180 3173 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650185 3173 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650191 3173 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:25:39.651713 master-0 kubenswrapper[3173]: W1203 14:25:39.650196 3173 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650201 3173 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650206 3173 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650211 3173 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650216 3173 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 
14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650222 3173 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650229 3173 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650234 3173 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650239 3173 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650244 3173 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650250 3173 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650254 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650260 3173 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650265 3173 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650270 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650275 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650282 3173 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650289 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650294 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:25:39.652257 master-0 kubenswrapper[3173]: W1203 14:25:39.650300 3173 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650305 3173 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650332 3173 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650339 3173 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650345 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650350 3173 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650357 3173 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650363 3173 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650369 3173 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650375 3173 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650381 3173 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 
14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650386 3173 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650391 3173 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650396 3173 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650402 3173 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650407 3173 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650413 3173 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650420 3173 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650426 3173 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:25:39.652988 master-0 kubenswrapper[3173]: W1203 14:25:39.650432 3173 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650438 3173 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650443 3173 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650448 3173 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650453 3173 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 
14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650458 3173 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650464 3173 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650469 3173 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650474 3173 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: W1203 14:25:39.650479 3173 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: I1203 14:25:39.650488 3173 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:25:39.653520 master-0 kubenswrapper[3173]: I1203 14:25:39.650944 3173 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 14:25:39.654177 master-0 kubenswrapper[3173]: I1203 14:25:39.654148 3173 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 14:25:39.654354 master-0 kubenswrapper[3173]: I1203 14:25:39.654302 3173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 03 14:25:39.655129 master-0 kubenswrapper[3173]: I1203 14:25:39.655054 3173 server.go:997] "Starting client certificate rotation" Dec 03 14:25:39.655129 master-0 kubenswrapper[3173]: I1203 14:25:39.655090 3173 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 14:25:39.655468 master-0 kubenswrapper[3173]: I1203 14:25:39.655358 3173 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 08:35:48.407750039 +0000 UTC Dec 03 14:25:39.655468 master-0 kubenswrapper[3173]: I1203 14:25:39.655456 3173 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h10m8.752296919s for next certificate rotation Dec 03 14:25:39.661372 master-0 kubenswrapper[3173]: I1203 14:25:39.661337 3173 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:25:39.667660 master-0 kubenswrapper[3173]: I1203 14:25:39.667615 3173 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:25:39.676657 master-0 kubenswrapper[3173]: I1203 14:25:39.676611 3173 log.go:25] "Validated CRI v1 runtime API" Dec 03 14:25:39.794469 master-0 kubenswrapper[3173]: I1203 14:25:39.794322 3173 log.go:25] "Validated CRI v1 image API" Dec 03 14:25:39.796231 master-0 kubenswrapper[3173]: I1203 14:25:39.796195 3173 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 14:25:39.799408 master-0 kubenswrapper[3173]: I1203 14:25:39.799368 3173 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3] Dec 03 14:25:39.799408 master-0 kubenswrapper[3173]: I1203 14:25:39.799395 3173 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Dec 03 14:25:39.814073 master-0 kubenswrapper[3173]: I1203 14:25:39.813754 3173 manager.go:217] Machine: {Timestamp:2025-12-03 14:25:39.812873584 +0000 UTC m=+0.294250986 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:5a54df78-64a7-4b65-a168-d6e871bf4ce7 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} 
{Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:5a:0b:7b:ac:d8:e6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] 
SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction 
Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:25:39.814335 master-0 kubenswrapper[3173]: I1203 14:25:39.814100 3173 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 03 14:25:39.814335 master-0 kubenswrapper[3173]: I1203 14:25:39.814238 3173 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:25:39.814616 master-0 kubenswrapper[3173]: I1203 14:25:39.814592 3173 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:25:39.814782 master-0 kubenswrapper[3173]: I1203 14:25:39.814740 3173 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:25:39.814960 master-0 kubenswrapper[3173]: I1203 14:25:39.814770 3173 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:25:39.815030 master-0 kubenswrapper[3173]: I1203 14:25:39.814981 3173 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:25:39.815030 master-0 kubenswrapper[3173]: I1203 14:25:39.814991 3173 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:25:39.815218 master-0 kubenswrapper[3173]: I1203 14:25:39.815189 3173 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:25:39.815275 master-0 kubenswrapper[3173]: I1203 14:25:39.815226 3173 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:25:39.815545 master-0 kubenswrapper[3173]: I1203 14:25:39.815517 3173 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:25:39.815790 master-0 kubenswrapper[3173]: I1203 14:25:39.815612 3173 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:25:39.816406 master-0 kubenswrapper[3173]: I1203 14:25:39.816378 3173 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:25:39.816406 master-0 kubenswrapper[3173]: I1203 14:25:39.816397 3173 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:25:39.816507 master-0 kubenswrapper[3173]: I1203 14:25:39.816432 3173 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:25:39.816507 master-0 kubenswrapper[3173]: I1203 14:25:39.816447 3173 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:25:39.816507 master-0 kubenswrapper[3173]: I1203 14:25:39.816472 3173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:25:39.818060 master-0 kubenswrapper[3173]: I1203 14:25:39.818037 3173 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:25:39.818460 master-0 kubenswrapper[3173]: I1203 14:25:39.818433 3173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Dec 03 14:25:39.821510 master-0 kubenswrapper[3173]: I1203 14:25:39.821469 3173 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:25:39.821844 master-0 kubenswrapper[3173]: I1203 14:25:39.821819 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:25:39.821897 master-0 kubenswrapper[3173]: I1203 14:25:39.821849 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:25:39.821897 master-0 kubenswrapper[3173]: I1203 14:25:39.821862 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:25:39.821897 master-0 kubenswrapper[3173]: I1203 14:25:39.821875 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:25:39.821897 master-0 kubenswrapper[3173]: I1203 14:25:39.821888 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:25:39.821897 master-0 kubenswrapper[3173]: I1203 14:25:39.821900 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.821913 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.821924 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.821938 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.821954 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.821991 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:25:39.822123 master-0 kubenswrapper[3173]: I1203 14:25:39.822019 3173 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Dec 03 14:25:39.822379 master-0 kubenswrapper[3173]: I1203 14:25:39.822357 3173 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:25:39.822941 master-0 kubenswrapper[3173]: I1203 14:25:39.822920 3173 server.go:1280] "Started kubelet" Dec 03 14:25:39.823215 master-0 kubenswrapper[3173]: I1203 14:25:39.823108 3173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 14:25:39.823653 master-0 kubenswrapper[3173]: I1203 14:25:39.823566 3173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:25:39.824024 master-0 kubenswrapper[3173]: W1203 14:25:39.823617 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:39.824024 master-0 kubenswrapper[3173]: I1203 14:25:39.823700 3173 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:25:39.824024 master-0 kubenswrapper[3173]: I1203 14:25:39.823713 3173 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:39.824024 master-0 kubenswrapper[3173]: E1203 14:25:39.823749 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:39.824640 master-0 kubenswrapper[3173]: I1203 14:25:39.824129 3173 server.go:236] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:25:39.824640 master-0 kubenswrapper[3173]: W1203 14:25:39.824394 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:39.824640 master-0 kubenswrapper[3173]: E1203 14:25:39.824452 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:39.824378 master-0 systemd[1]: Started Kubernetes Kubelet. Dec 03 14:25:39.824997 master-0 kubenswrapper[3173]: E1203 14:25:39.824504 3173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.187dbabaa6b85958 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:25:39.822885208 +0000 UTC m=+0.304262590,LastTimestamp:2025-12-03 14:25:39.822885208 +0000 UTC m=+0.304262590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:25:39.825684 master-0 kubenswrapper[3173]: I1203 14:25:39.825648 3173 server.go:449] "Adding debug handlers to kubelet server" Dec 03 
14:25:39.825684 master-0 kubenswrapper[3173]: I1203 14:25:39.825675 3173 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825719 3173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825779 3173 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 07:41:09.279091141 +0000 UTC Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825833 3173 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h15m29.45326145s for next certificate rotation Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825820 3173 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825860 3173 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 14:25:39.825967 master-0 kubenswrapper[3173]: I1203 14:25:39.825843 3173 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Dec 03 14:25:39.827000 master-0 kubenswrapper[3173]: E1203 14:25:39.825882 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:39.827000 master-0 kubenswrapper[3173]: W1203 14:25:39.826771 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:39.827000 master-0 kubenswrapper[3173]: E1203 14:25:39.826837 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:39.827309 master-0 kubenswrapper[3173]: E1203 14:25:39.826816 3173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Dec 03 14:25:39.827893 master-0 kubenswrapper[3173]: I1203 14:25:39.827507 3173 factory.go:55] Registering systemd factory Dec 03 14:25:39.827893 master-0 kubenswrapper[3173]: I1203 14:25:39.827552 3173 factory.go:221] Registration of the systemd container factory successfully Dec 03 14:25:39.827893 master-0 kubenswrapper[3173]: I1203 14:25:39.827843 3173 factory.go:153] Registering CRI-O factory Dec 03 14:25:39.827893 master-0 kubenswrapper[3173]: I1203 14:25:39.827870 3173 factory.go:221] Registration of the crio container factory successfully Dec 03 14:25:39.828606 master-0 kubenswrapper[3173]: I1203 14:25:39.827938 3173 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 14:25:39.828606 master-0 kubenswrapper[3173]: I1203 14:25:39.827968 3173 factory.go:103] Registering Raw factory Dec 03 14:25:39.828606 master-0 kubenswrapper[3173]: I1203 14:25:39.827992 3173 manager.go:1196] Started watching for new ooms in manager Dec 03 14:25:39.828785 master-0 kubenswrapper[3173]: I1203 14:25:39.828643 3173 manager.go:319] Starting recovery of all containers Dec 03 14:25:39.846085 master-0 kubenswrapper[3173]: I1203 14:25:39.846053 3173 manager.go:324] Recovery completed Dec 03 14:25:39.856706 master-0 kubenswrapper[3173]: I1203 14:25:39.856666 3173 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:39.858313 master-0 kubenswrapper[3173]: I1203 14:25:39.858259 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:39.858578 master-0 kubenswrapper[3173]: I1203 14:25:39.858330 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:39.858578 master-0 kubenswrapper[3173]: I1203 14:25:39.858343 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:39.859310 master-0 kubenswrapper[3173]: I1203 14:25:39.859269 3173 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 03 14:25:39.859310 master-0 kubenswrapper[3173]: I1203 14:25:39.859296 3173 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 03 14:25:39.859425 master-0 kubenswrapper[3173]: I1203 14:25:39.859402 3173 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890097 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890194 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890207 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" 
volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890218 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890227 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890236 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890246 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890254 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890265 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b02244d0-f4ef-4702-950d-9e3fb5ced128" 
volumeName="kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890276 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890287 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache" seLinuxMountContext="" Dec 03 14:25:39.890255 master-0 kubenswrapper[3173]: I1203 14:25:39.890298 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890314 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890325 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890334 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" 
volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890344 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890355 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890367 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890379 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890389 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890400 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890410 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890421 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890432 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890442 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890452 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890466 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890476 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890487 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890497 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890507 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890544 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.890959 master-0 kubenswrapper[3173]: I1203 14:25:39.890556 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890587 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890598 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890608 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890618 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890628 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca" seLinuxMountContext="" Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890638 3173 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890649 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890660 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890670 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890680 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890690 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890702 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890711 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42c95e54-b4ba-4b19-a97c-abcec840ac5d" volumeName="kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890722 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38888547-ed48-4f96-810d-bcd04e49bd6b" volumeName="kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890731 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890741 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890752 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890762 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890772 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890787 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls" seLinuxMountContext=""
Dec 03 14:25:39.891820 master-0 kubenswrapper[3173]: I1203 14:25:39.890797 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890808 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890818 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890833 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890842 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890853 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890862 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890872 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890881 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890891 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890901 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890910 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890922 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890932 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890975 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890987 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.890997 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.891023 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.891034 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.891044 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.891055 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5" seLinuxMountContext=""
Dec 03 14:25:39.893078 master-0 kubenswrapper[3173]: I1203 14:25:39.891065 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891075 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891085 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891095 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891106 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891115 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891126 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891144 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891156 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891166 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891176 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891185 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891196 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891235 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891251 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891261 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891275 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891287 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891298 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891309 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891319 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config" seLinuxMountContext=""
Dec 03 14:25:39.893854 master-0 kubenswrapper[3173]: I1203 14:25:39.891340 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" volumeName="kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891351 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891361 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891371 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891381 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891391 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891401 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891411 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891421 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891435 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891449 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891462 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891475 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891488 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891500 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891513 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891528 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891545 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891561 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891579 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891592 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.895319 master-0 kubenswrapper[3173]: I1203 14:25:39.891604 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891614 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891626 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891638 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891651 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1b3ab29-77cf-48ac-8881-846c46bb9048" volumeName="kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891662 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891673 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891684 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891695 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891706 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891717 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" volumeName="kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891728 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891738 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891751 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891763 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891774 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891785 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891798 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891809 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891821 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891833 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q" seLinuxMountContext=""
Dec 03 14:25:39.896405 master-0 kubenswrapper[3173]: I1203 14:25:39.891844 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891856 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891867 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891880 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891891 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891902 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891914 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891924 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891936 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891946 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891957 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891969 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891980 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.891992 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892017 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892031 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892042 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892053 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892064 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892077 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892088 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls" seLinuxMountContext=""
Dec 03 14:25:39.897895 master-0 kubenswrapper[3173]: I1203 14:25:39.892098 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext=""
Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892110 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892122 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext=""
Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892134 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892147 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext=""
Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892160 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892171 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892181 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892192 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892203 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892214 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892225 3173 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892235 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892246 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892257 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892269 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892280 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892290 3173 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892303 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892318 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892332 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp" seLinuxMountContext="" Dec 03 14:25:39.899031 master-0 kubenswrapper[3173]: I1203 14:25:39.892347 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892362 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892376 3173 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892387 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892399 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892419 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892434 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892447 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892458 3173 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892470 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892480 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892493 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892508 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" volumeName="kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892522 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892536 3173 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892550 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892564 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892577 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892592 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892607 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892620 3173 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.900126 master-0 kubenswrapper[3173]: I1203 14:25:39.892634 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892649 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892660 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892674 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892690 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892705 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892720 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892751 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f723d97-5c65-4ae7-9085-26db8b4f2f52" volumeName="kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892775 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892793 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892806 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892817 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892835 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892847 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892863 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892875 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892889 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892900 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892912 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892923 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892936 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8" seLinuxMountContext="" Dec 03 14:25:39.900916 master-0 kubenswrapper[3173]: I1203 14:25:39.892947 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.892958 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.892968 3173 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.892979 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.892989 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893015 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" volumeName="kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893029 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893039 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893049 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893060 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893070 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893080 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893090 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893101 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893111 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="38888547-ed48-4f96-810d-bcd04e49bd6b" volumeName="kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893122 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893132 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893143 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893154 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893165 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893177 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca" seLinuxMountContext="" Dec 03 14:25:39.901861 master-0 kubenswrapper[3173]: I1203 14:25:39.893187 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0a2889-39a5-471e-bd46-958e2f8eacaa" volumeName="kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893197 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893207 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893216 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893226 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893236 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893248 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893258 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893268 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893279 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893290 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893300 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893310 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893320 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893330 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893340 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893350 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1b3ab29-77cf-48ac-8881-846c46bb9048" volumeName="kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893363 3173 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893374 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893384 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893393 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.902960 master-0 kubenswrapper[3173]: I1203 14:25:39.893404 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893414 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893424 3173 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893434 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893445 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893467 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893478 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893489 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893499 3173 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893508 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893519 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893528 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893538 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893549 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893559 3173 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893570 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893580 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893590 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893600 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893611 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893621 3173 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca" seLinuxMountContext="" Dec 03 14:25:39.903859 master-0 kubenswrapper[3173]: I1203 14:25:39.893633 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893643 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893654 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893663 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893673 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893683 3173 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893693 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893707 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893717 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893726 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893735 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893744 3173 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893752 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893763 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893771 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893780 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893790 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893798 3173 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893808 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893816 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893825 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.904722 master-0 kubenswrapper[3173]: I1203 14:25:39.893834 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893842 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893851 3173 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893862 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893871 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893882 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893892 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893902 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893911 3173 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893921 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893930 3173 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls" seLinuxMountContext="" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893945 3173 reconstruct.go:97] "Volume reconstruction finished" Dec 03 14:25:39.905852 master-0 kubenswrapper[3173]: I1203 14:25:39.893954 3173 reconciler.go:26] "Reconciler: start to sync state" Dec 03 14:25:39.929067 master-0 kubenswrapper[3173]: E1203 14:25:39.928971 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:39.935763 master-0 kubenswrapper[3173]: I1203 14:25:39.935714 3173 policy_none.go:49] "None policy: Start" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.936753 3173 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.936786 3173 state_mem.go:35] "Initializing new in-memory state store" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.994498 3173 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.996184 3173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.996253 3173 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: I1203 14:25:39.996281 3173 kubelet.go:2335] "Starting kubelet main sync loop" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:39.996420 3173 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: W1203 14:25:39.996900 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:39.996950 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.028686 3173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.029089 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not 
found" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.096572 3173 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.129420 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.229535 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.296667 3173 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 03 14:25:40.428731 master-0 kubenswrapper[3173]: E1203 14:25:40.330177 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:40.431747 master-0 kubenswrapper[3173]: E1203 14:25:40.429574 3173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Dec 03 14:25:40.431747 master-0 kubenswrapper[3173]: E1203 14:25:40.430835 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:25:40.491819 master-0 kubenswrapper[3173]: I1203 14:25:40.491771 3173 manager.go:334] "Starting Device Plugin manager" Dec 03 14:25:40.491981 master-0 kubenswrapper[3173]: I1203 14:25:40.491834 3173 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 03 14:25:40.491981 master-0 kubenswrapper[3173]: I1203 14:25:40.491856 3173 server.go:79] "Starting device plugin registration server" Dec 03 14:25:40.492354 master-0 
kubenswrapper[3173]: I1203 14:25:40.492314 3173 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 03 14:25:40.492450 master-0 kubenswrapper[3173]: I1203 14:25:40.492342 3173 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 03 14:25:40.492595 master-0 kubenswrapper[3173]: I1203 14:25:40.492542 3173 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 03 14:25:40.492729 master-0 kubenswrapper[3173]: I1203 14:25:40.492695 3173 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 03 14:25:40.492729 master-0 kubenswrapper[3173]: I1203 14:25:40.492712 3173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 03 14:25:40.500865 master-0 kubenswrapper[3173]: E1203 14:25:40.500824 3173 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Dec 03 14:25:40.592596 master-0 kubenswrapper[3173]: I1203 14:25:40.592518 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.593964 master-0 kubenswrapper[3173]: I1203 14:25:40.593934 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.593964 master-0 kubenswrapper[3173]: I1203 14:25:40.593966 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.594115 master-0 kubenswrapper[3173]: I1203 14:25:40.593975 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.594115 master-0 kubenswrapper[3173]: I1203 14:25:40.593997 3173 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:25:40.595019 master-0 kubenswrapper[3173]: E1203 14:25:40.594943 3173 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 14:25:40.697309 master-0 kubenswrapper[3173]: I1203 14:25:40.697109 3173 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:25:40.697309 master-0 kubenswrapper[3173]: I1203 14:25:40.697273 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.698507 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.698554 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.698570 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.698727 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699141 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699201 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699742 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699758 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699766 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.699843 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700123 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700177 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700194 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700203 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700213 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700459 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700485 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700501 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700605 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700725 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.700759 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701044 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701075 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701087 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701375 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701403 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701403 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701432 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701411 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701446 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 
master-0 kubenswrapper[3173]: I1203 14:25:40.701583 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701747 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.701783 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702248 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702274 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702286 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702378 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702399 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702407 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702463 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.702499 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.703180 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.703210 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.722064 master-0 kubenswrapper[3173]: I1203 14:25:40.703220 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.796061 master-0 kubenswrapper[3173]: I1203 14:25:40.796000 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:40.797029 master-0 kubenswrapper[3173]: I1203 14:25:40.796974 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:40.797029 master-0 kubenswrapper[3173]: I1203 14:25:40.797001 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:40.797029 master-0 kubenswrapper[3173]: I1203 14:25:40.797023 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:40.797200 master-0 kubenswrapper[3173]: I1203 14:25:40.797045 3173 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:25:40.797787 master-0 kubenswrapper[3173]: E1203 14:25:40.797746 3173 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" 
node="master-0" Dec 03 14:25:40.806024 master-0 kubenswrapper[3173]: I1203 14:25:40.805968 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.806024 master-0 kubenswrapper[3173]: I1203 14:25:40.806019 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.806177 master-0 kubenswrapper[3173]: I1203 14:25:40.806046 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.806177 master-0 kubenswrapper[3173]: I1203 14:25:40.806069 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.806177 master-0 kubenswrapper[3173]: I1203 14:25:40.806092 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod 
\"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.806177 master-0 kubenswrapper[3173]: I1203 14:25:40.806114 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.806355 master-0 kubenswrapper[3173]: I1203 14:25:40.806187 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806355 master-0 kubenswrapper[3173]: I1203 14:25:40.806253 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.806355 master-0 kubenswrapper[3173]: I1203 14:25:40.806292 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806355 master-0 kubenswrapper[3173]: I1203 14:25:40.806321 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806742 master-0 kubenswrapper[3173]: I1203 14:25:40.806381 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806742 master-0 kubenswrapper[3173]: I1203 14:25:40.806441 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.806742 master-0 kubenswrapper[3173]: I1203 14:25:40.806482 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806742 master-0 kubenswrapper[3173]: I1203 14:25:40.806509 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.806742 master-0 kubenswrapper[3173]: I1203 14:25:40.806534 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.825532 master-0 kubenswrapper[3173]: I1203 14:25:40.825480 3173 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:40.908124 master-0 kubenswrapper[3173]: I1203 14:25:40.908049 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.908124 master-0 kubenswrapper[3173]: I1203 14:25:40.908114 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908124 master-0 kubenswrapper[3173]: I1203 14:25:40.908137 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908157 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908180 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908200 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908219 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908240 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908250 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908278 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908312 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908327 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908330 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908368 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908442 master-0 
kubenswrapper[3173]: I1203 14:25:40.908405 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908442 master-0 kubenswrapper[3173]: I1203 14:25:40.908279 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908453 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908372 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908496 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908503 3173 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908503 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908418 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908557 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908593 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:40.908932 master-0 
kubenswrapper[3173]: I1203 14:25:40.908616 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908658 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908697 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908746 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908757 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.908932 master-0 kubenswrapper[3173]: I1203 14:25:40.908814 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0" Dec 03 14:25:40.911256 master-0 kubenswrapper[3173]: W1203 14:25:40.911184 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:40.911256 master-0 kubenswrapper[3173]: E1203 14:25:40.911246 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:41.027190 master-0 kubenswrapper[3173]: I1203 14:25:41.027050 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:41.037199 master-0 kubenswrapper[3173]: W1203 14:25:41.037085 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:41.037199 master-0 kubenswrapper[3173]: E1203 14:25:41.037164 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:41.045518 master-0 kubenswrapper[3173]: W1203 14:25:41.045457 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1dbec7c25a38180c3a6691040eb5a8.slice/crio-aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095 WatchSource:0}: Error finding container aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095: Status 404 returned error can't find the container with id aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095 Dec 03 14:25:41.048584 master-0 kubenswrapper[3173]: I1203 14:25:41.048506 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:41.055162 master-0 kubenswrapper[3173]: I1203 14:25:41.055136 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:25:41.057260 master-0 kubenswrapper[3173]: W1203 14:25:41.057231 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2fa610bb2a39c39fcdd00db03a511a.slice/crio-40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54 WatchSource:0}: Error finding container 40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54: Status 404 returned error can't find the container with id 40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54 Dec 03 14:25:41.075606 master-0 kubenswrapper[3173]: W1203 14:25:41.075556 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb495b0c38f2c54e7cc46282c5f92aab5.slice/crio-8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c WatchSource:0}: Error finding container 8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c: Status 404 returned error can't find the container with id 8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c Dec 03 14:25:41.087715 master-0 kubenswrapper[3173]: I1203 14:25:41.087672 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Dec 03 14:25:41.093554 master-0 kubenswrapper[3173]: I1203 14:25:41.093524 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:41.099737 master-0 kubenswrapper[3173]: W1203 14:25:41.099680 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dd8b778e190b1975a0a8fad534da6dd.slice/crio-fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516 WatchSource:0}: Error finding container fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516: Status 404 returned error can't find the container with id fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516 Dec 03 14:25:41.146047 master-0 kubenswrapper[3173]: W1203 14:25:41.145953 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:41.146291 master-0 kubenswrapper[3173]: E1203 14:25:41.146061 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:41.199085 master-0 kubenswrapper[3173]: I1203 14:25:41.198958 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:41.200769 master-0 kubenswrapper[3173]: I1203 14:25:41.200716 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:41.200843 master-0 kubenswrapper[3173]: I1203 14:25:41.200796 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:41.200843 
master-0 kubenswrapper[3173]: I1203 14:25:41.200810 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:41.200937 master-0 kubenswrapper[3173]: I1203 14:25:41.200850 3173 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:25:41.202192 master-0 kubenswrapper[3173]: E1203 14:25:41.202154 3173 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 14:25:41.207415 master-0 kubenswrapper[3173]: W1203 14:25:41.207331 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:41.207483 master-0 kubenswrapper[3173]: E1203 14:25:41.207435 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:41.230978 master-0 kubenswrapper[3173]: E1203 14:25:41.230912 3173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Dec 03 14:25:41.825763 master-0 kubenswrapper[3173]: I1203 14:25:41.825719 3173 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial 
tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:42.002339 master-0 kubenswrapper[3173]: I1203 14:25:42.002293 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:42.003659 master-0 kubenswrapper[3173]: I1203 14:25:42.003623 3173 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0" exitCode=0 Dec 03 14:25:42.003780 master-0 kubenswrapper[3173]: I1203 14:25:42.003699 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0"} Dec 03 14:25:42.003869 master-0 kubenswrapper[3173]: I1203 14:25:42.003794 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516"} Dec 03 14:25:42.003869 master-0 kubenswrapper[3173]: I1203 14:25:42.003839 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.003964 master-0 kubenswrapper[3173]: I1203 14:25:42.003889 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:42.004026 master-0 kubenswrapper[3173]: I1203 14:25:42.003979 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.004026 master-0 kubenswrapper[3173]: I1203 14:25:42.003999 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.004127 master-0 kubenswrapper[3173]: I1203 14:25:42.004072 3173 kubelet_node_status.go:76] "Attempting to register node" 
node="master-0" Dec 03 14:25:42.004734 master-0 kubenswrapper[3173]: I1203 14:25:42.004524 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.004734 master-0 kubenswrapper[3173]: I1203 14:25:42.004566 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.004734 master-0 kubenswrapper[3173]: I1203 14:25:42.004578 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.005345 master-0 kubenswrapper[3173]: E1203 14:25:42.005302 3173 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Dec 03 14:25:42.005704 master-0 kubenswrapper[3173]: I1203 14:25:42.005651 3173 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="a80929b981b600c3956c93ee3dcc81c8d5ba604e4e6c208f8e3ae77bdf73736d" exitCode=0 Dec 03 14:25:42.005831 master-0 kubenswrapper[3173]: I1203 14:25:42.005773 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"a80929b981b600c3956c93ee3dcc81c8d5ba604e4e6c208f8e3ae77bdf73736d"} Dec 03 14:25:42.005911 master-0 kubenswrapper[3173]: I1203 14:25:42.005865 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c"} Dec 03 14:25:42.006033 master-0 kubenswrapper[3173]: I1203 14:25:42.005957 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 03 14:25:42.009026 master-0 kubenswrapper[3173]: I1203 14:25:42.007712 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.009026 master-0 kubenswrapper[3173]: I1203 14:25:42.007768 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.009026 master-0 kubenswrapper[3173]: I1203 14:25:42.007792 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.009842 master-0 kubenswrapper[3173]: I1203 14:25:42.009786 3173 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="6eae176e0ae8fefc0bdfeffe3c926861eedc7d77177bb0a1c542bb03d7b718af" exitCode=0 Dec 03 14:25:42.009911 master-0 kubenswrapper[3173]: I1203 14:25:42.009868 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"6eae176e0ae8fefc0bdfeffe3c926861eedc7d77177bb0a1c542bb03d7b718af"} Dec 03 14:25:42.010114 master-0 kubenswrapper[3173]: I1203 14:25:42.009979 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54"} Dec 03 14:25:42.010114 master-0 kubenswrapper[3173]: I1203 14:25:42.010067 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:42.011531 master-0 kubenswrapper[3173]: I1203 14:25:42.011483 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.011531 master-0 kubenswrapper[3173]: I1203 14:25:42.011519 3173 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.011531 master-0 kubenswrapper[3173]: I1203 14:25:42.011530 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.017146 master-0 kubenswrapper[3173]: I1203 14:25:42.016986 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0"} Dec 03 14:25:42.017269 master-0 kubenswrapper[3173]: I1203 14:25:42.017163 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636"} Dec 03 14:25:42.017269 master-0 kubenswrapper[3173]: I1203 14:25:42.017180 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095"} Dec 03 14:25:42.021530 master-0 kubenswrapper[3173]: I1203 14:25:42.021245 3173 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d" exitCode=0 Dec 03 14:25:42.021530 master-0 kubenswrapper[3173]: I1203 14:25:42.021295 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d"} Dec 03 14:25:42.021530 master-0 kubenswrapper[3173]: 
I1203 14:25:42.021330 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1"} Dec 03 14:25:42.021530 master-0 kubenswrapper[3173]: I1203 14:25:42.021456 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:42.022513 master-0 kubenswrapper[3173]: I1203 14:25:42.022464 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.022513 master-0 kubenswrapper[3173]: I1203 14:25:42.022508 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.022513 master-0 kubenswrapper[3173]: I1203 14:25:42.022522 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.025270 master-0 kubenswrapper[3173]: I1203 14:25:42.025233 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:42.026990 master-0 kubenswrapper[3173]: I1203 14:25:42.026951 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:42.026990 master-0 kubenswrapper[3173]: I1203 14:25:42.026987 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:42.027109 master-0 kubenswrapper[3173]: I1203 14:25:42.027018 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:42.655097 master-0 kubenswrapper[3173]: W1203 14:25:42.654942 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Dec 03 14:25:42.655097 master-0 kubenswrapper[3173]: E1203 14:25:42.655057 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Dec 03 14:25:43.025456 master-0 kubenswrapper[3173]: I1203 14:25:43.025398 3173 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7" exitCode=0 Dec 03 14:25:43.025951 master-0 kubenswrapper[3173]: I1203 14:25:43.025459 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7"} Dec 03 14:25:43.025951 master-0 kubenswrapper[3173]: I1203 14:25:43.025585 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.026391 master-0 kubenswrapper[3173]: I1203 14:25:43.026368 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.026391 master-0 kubenswrapper[3173]: I1203 14:25:43.026393 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.026391 master-0 kubenswrapper[3173]: I1203 14:25:43.026401 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.027579 master-0 kubenswrapper[3173]: I1203 14:25:43.027550 3173 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.027643 master-0 kubenswrapper[3173]: I1203 14:25:43.027576 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"cd35b5f869b9c5557094931b96d302011205f125ccc16f0741d5cad5f6b8c72d"} Dec 03 14:25:43.028126 master-0 kubenswrapper[3173]: I1203 14:25:43.028097 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.028126 master-0 kubenswrapper[3173]: I1203 14:25:43.028126 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.028270 master-0 kubenswrapper[3173]: I1203 14:25:43.028138 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.031663 master-0 kubenswrapper[3173]: I1203 14:25:43.031317 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"05a2610f6bca4defc9b7ede8255a1c063ebe53f7d07ab7227fcf2edbc056b331"} Dec 03 14:25:43.031663 master-0 kubenswrapper[3173]: I1203 14:25:43.031343 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"2d61d8802bbc570d04dd9977fb07dd6294b8212bfe0e7176af3f6ce67f85ee5a"} Dec 03 14:25:43.031663 master-0 kubenswrapper[3173]: I1203 14:25:43.031353 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"d0a827a444c38d75c515a416cb1a917a642fb70a7523efb53087345e0bb3e2ee"} Dec 03 14:25:43.031663 master-0 kubenswrapper[3173]: I1203 14:25:43.031357 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.031893 master-0 kubenswrapper[3173]: I1203 14:25:43.031882 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.031927 master-0 kubenswrapper[3173]: I1203 14:25:43.031897 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.031927 master-0 kubenswrapper[3173]: I1203 14:25:43.031906 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.034940 master-0 kubenswrapper[3173]: I1203 14:25:43.034031 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.034940 master-0 kubenswrapper[3173]: I1203 14:25:43.034020 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860"} Dec 03 14:25:43.034940 master-0 kubenswrapper[3173]: I1203 14:25:43.034101 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46"} Dec 03 14:25:43.034940 master-0 kubenswrapper[3173]: I1203 14:25:43.034635 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.034940 master-0 
kubenswrapper[3173]: I1203 14:25:43.034666 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.034940 master-0 kubenswrapper[3173]: I1203 14:25:43.034678 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.037731 master-0 kubenswrapper[3173]: I1203 14:25:43.037693 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214"} Dec 03 14:25:43.037731 master-0 kubenswrapper[3173]: I1203 14:25:43.037730 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde"} Dec 03 14:25:43.037844 master-0 kubenswrapper[3173]: I1203 14:25:43.037744 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438"} Dec 03 14:25:43.037844 master-0 kubenswrapper[3173]: I1203 14:25:43.037758 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7"} Dec 03 14:25:43.037844 master-0 kubenswrapper[3173]: I1203 14:25:43.037820 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c"} Dec 03 14:25:43.038027 master-0 kubenswrapper[3173]: I1203 14:25:43.037909 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.038561 master-0 kubenswrapper[3173]: I1203 14:25:43.038539 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.038619 master-0 kubenswrapper[3173]: I1203 14:25:43.038570 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.038619 master-0 kubenswrapper[3173]: I1203 14:25:43.038583 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.606485 master-0 kubenswrapper[3173]: I1203 14:25:43.606375 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:43.608116 master-0 kubenswrapper[3173]: I1203 14:25:43.608081 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:43.608216 master-0 kubenswrapper[3173]: I1203 14:25:43.608126 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:43.608216 master-0 kubenswrapper[3173]: I1203 14:25:43.608138 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:43.608216 master-0 kubenswrapper[3173]: I1203 14:25:43.608164 3173 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Dec 03 14:25:44.043362 master-0 kubenswrapper[3173]: I1203 14:25:44.043278 3173 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" 
containerID="2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb" exitCode=0 Dec 03 14:25:44.043879 master-0 kubenswrapper[3173]: I1203 14:25:44.043473 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:25:44.043879 master-0 kubenswrapper[3173]: I1203 14:25:44.043489 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:44.043879 master-0 kubenswrapper[3173]: I1203 14:25:44.043528 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:44.044422 master-0 kubenswrapper[3173]: I1203 14:25:44.044022 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb"} Dec 03 14:25:44.044422 master-0 kubenswrapper[3173]: I1203 14:25:44.044136 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:44.044422 master-0 kubenswrapper[3173]: I1203 14:25:44.044198 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.044817 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.044837 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.044845 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.045077 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.045110 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:44.045167 master-0 kubenswrapper[3173]: I1203 14:25:44.045126 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045210 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045260 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045281 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045500 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045542 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:44.045707 master-0 kubenswrapper[3173]: I1203 14:25:44.045565 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:44.586998 master-0 kubenswrapper[3173]: I1203 14:25:44.586939 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:25:45.050833 master-0 kubenswrapper[3173]: I1203 14:25:45.050743 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"e40eeccb22154afc36511e259a0bbd0340bbb8c152ccc392f07b9b63e9286432"} Dec 03 14:25:45.050833 master-0 kubenswrapper[3173]: I1203 14:25:45.050815 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"be55425be92502579ba54e0a7029374fa5869946a681a8d47fee9f3e2abb52ad"} Dec 03 14:25:45.050833 master-0 kubenswrapper[3173]: I1203 14:25:45.050830 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"d7424a3adff7dce95e229689db3a097554825a0a1b6fc1da3f511760d76ff1a4"} Dec 03 14:25:45.050833 master-0 kubenswrapper[3173]: I1203 14:25:45.050839 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"2eb83f75a316413d7cd4039c1ecf1652c36407775bf11a763ce99c299576a480"} Dec 03 14:25:45.050833 master-0 kubenswrapper[3173]: I1203 14:25:45.050846 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:45.051632 master-0 kubenswrapper[3173]: I1203 14:25:45.051604 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:45.051677 master-0 kubenswrapper[3173]: I1203 14:25:45.051644 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:45.051677 master-0 kubenswrapper[3173]: I1203 14:25:45.051653 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:45.461118 master-0 kubenswrapper[3173]: I1203 14:25:45.460972 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:45.461454 master-0 kubenswrapper[3173]: I1203 14:25:45.461415 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:45.463260 master-0 kubenswrapper[3173]: I1203 14:25:45.463220 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:45.463320 master-0 kubenswrapper[3173]: I1203 14:25:45.463276 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:45.463320 master-0 kubenswrapper[3173]: I1203 14:25:45.463288 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:45.467707 master-0 kubenswrapper[3173]: I1203 14:25:45.467678 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:25:46.059466 master-0 kubenswrapper[3173]: I1203 14:25:46.059357 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:46.060528 master-0 kubenswrapper[3173]: I1203 14:25:46.060486 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:46.060791 master-0 kubenswrapper[3173]: I1203 14:25:46.060717 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"273350e7b0aeceae0168f90588eb07e0ee52a413f6434e0abfb74158cc482c9d"} Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061203 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061257 3173 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061277 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061491 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061522 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:46.061551 master-0 kubenswrapper[3173]: I1203 14:25:46.061538 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:46.684992 master-0 kubenswrapper[3173]: I1203 14:25:46.684874 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Dec 03 14:25:46.930176 master-0 kubenswrapper[3173]: I1203 14:25:46.929932 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:46.930176 master-0 kubenswrapper[3173]: I1203 14:25:46.930242 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:25:46.930696 master-0 kubenswrapper[3173]: I1203 14:25:46.930301 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:46.931746 master-0 kubenswrapper[3173]: I1203 14:25:46.931672 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:46.931746 master-0 kubenswrapper[3173]: I1203 14:25:46.931728 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:46.931746 master-0 
kubenswrapper[3173]: I1203 14:25:46.931746 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:47.063203 master-0 kubenswrapper[3173]: I1203 14:25:47.062988 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:47.064245 master-0 kubenswrapper[3173]: I1203 14:25:47.064219 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:47.064364 master-0 kubenswrapper[3173]: I1203 14:25:47.064272 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:47.064364 master-0 kubenswrapper[3173]: I1203 14:25:47.064292 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Dec 03 14:25:47.533646 master-0 kubenswrapper[3173]: I1203 14:25:47.533416 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:25:47.533889 master-0 kubenswrapper[3173]: I1203 14:25:47.533690 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:25:47.533889 master-0 kubenswrapper[3173]: I1203 14:25:47.533743 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 03 14:25:47.535058 master-0 kubenswrapper[3173]: I1203 14:25:47.534992 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Dec 03 14:25:47.535195 master-0 kubenswrapper[3173]: I1203 14:25:47.535079 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Dec 03 14:25:47.535195 master-0 kubenswrapper[3173]: I1203 14:25:47.535097 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID"
Dec 03 14:25:48.065489 master-0 kubenswrapper[3173]: I1203 14:25:48.065396 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:48.066758 master-0 kubenswrapper[3173]: I1203 14:25:48.066480 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:48.066758 master-0 kubenswrapper[3173]: I1203 14:25:48.066516 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:48.066758 master-0 kubenswrapper[3173]: I1203 14:25:48.066526 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:48.428939 master-0 kubenswrapper[3173]: I1203 14:25:48.428700 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:25:48.429327 master-0 kubenswrapper[3173]: I1203 14:25:48.429088 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:48.430419 master-0 kubenswrapper[3173]: I1203 14:25:48.430338 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:48.430419 master-0 kubenswrapper[3173]: I1203 14:25:48.430380 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:48.430419 master-0 kubenswrapper[3173]: I1203 14:25:48.430392 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:49.406657 master-0 kubenswrapper[3173]: I1203 14:25:49.406554 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Dec 03 14:25:49.407749 master-0 kubenswrapper[3173]: I1203 14:25:49.407291 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:49.408228 master-0 kubenswrapper[3173]: I1203 14:25:49.408195 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:49.408363 master-0 kubenswrapper[3173]: I1203 14:25:49.408246 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:49.408363 master-0 kubenswrapper[3173]: I1203 14:25:49.408254 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:49.926469 master-0 kubenswrapper[3173]: I1203 14:25:49.926352 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:25:49.926803 master-0 kubenswrapper[3173]: I1203 14:25:49.926575 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:49.927762 master-0 kubenswrapper[3173]: I1203 14:25:49.927698 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:49.927875 master-0 kubenswrapper[3173]: I1203 14:25:49.927774 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:49.927875 master-0 kubenswrapper[3173]: I1203 14:25:49.927803 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:50.501121 master-0 kubenswrapper[3173]: E1203 14:25:50.500995 3173 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Dec 03 14:25:50.713924 master-0 kubenswrapper[3173]: I1203 14:25:50.713769 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:25:50.714190 master-0 kubenswrapper[3173]: I1203 14:25:50.714134 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:50.715483 master-0 kubenswrapper[3173]: I1203 14:25:50.715445 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:50.715535 master-0 kubenswrapper[3173]: I1203 14:25:50.715500 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:50.715535 master-0 kubenswrapper[3173]: I1203 14:25:50.715516 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:51.762795 master-0 kubenswrapper[3173]: I1203 14:25:51.762685 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:25:51.764420 master-0 kubenswrapper[3173]: I1203 14:25:51.762909 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:51.764420 master-0 kubenswrapper[3173]: I1203 14:25:51.764165 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:51.764420 master-0 kubenswrapper[3173]: I1203 14:25:51.764226 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:51.764420 master-0 kubenswrapper[3173]: I1203 14:25:51.764237 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:51.768623 master-0 kubenswrapper[3173]: I1203 14:25:51.768566 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03
14:25:52.074443 master-0 kubenswrapper[3173]: I1203 14:25:52.074316 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:52.075122 master-0 kubenswrapper[3173]: I1203 14:25:52.075097 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:52.075197 master-0 kubenswrapper[3173]: I1203 14:25:52.075126 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:52.075197 master-0 kubenswrapper[3173]: I1203 14:25:52.075134 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:52.827335 master-0 kubenswrapper[3173]: I1203 14:25:52.827253 3173 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": net/http: TLS handshake timeout
Dec 03 14:25:52.832517 master-0 kubenswrapper[3173]: E1203 14:25:52.832447 3173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 03 14:25:52.926740 master-0 kubenswrapper[3173]: I1203 14:25:52.926663 3173 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 03 14:25:52.927000 master-0 kubenswrapper[3173]: I1203 14:25:52.926753 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 03 14:25:53.244154 master-0 kubenswrapper[3173]: W1203 14:25:53.243947 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 03 14:25:53.244154 master-0 kubenswrapper[3173]: I1203 14:25:53.244118 3173 trace.go:236] Trace[644704865]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:25:43.242) (total time: 10001ms):
Dec 03 14:25:53.244154 master-0 kubenswrapper[3173]: Trace[644704865]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:53.243)
Dec 03 14:25:53.244154 master-0 kubenswrapper[3173]: Trace[644704865]: [10.001442474s] [10.001442474s] END
Dec 03 14:25:53.244154 master-0 kubenswrapper[3173]: E1203 14:25:53.244148 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 03 14:25:53.343761 master-0 kubenswrapper[3173]: W1203 14:25:53.343676 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 03 14:25:53.343761 master-0 kubenswrapper[3173]: I1203 14:25:53.343771 3173 trace.go:236] Trace[210305640]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:25:43.342) (total time: 10001ms):
Dec 03 14:25:53.343761 master-0 kubenswrapper[3173]: Trace[210305640]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:53.343)
Dec 03 14:25:53.343761 master-0 kubenswrapper[3173]: Trace[210305640]: [10.00166168s] [10.00166168s] END
Dec 03 14:25:53.344176 master-0 kubenswrapper[3173]: E1203 14:25:53.343799 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 03 14:25:53.610410 master-0 kubenswrapper[3173]: E1203 14:25:53.610231 3173 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="master-0"
Dec 03 14:25:53.719381 master-0 kubenswrapper[3173]: E1203 14:25:53.719190 3173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{master-0.187dbabaa6b85958 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:25:39.822885208 +0000 UTC m=+0.304262590,LastTimestamp:2025-12-03 14:25:39.822885208 +0000 UTC
m=+0.304262590,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Dec 03 14:25:54.014572 master-0 kubenswrapper[3173]: W1203 14:25:54.014427 3173 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 03 14:25:54.014572 master-0 kubenswrapper[3173]: I1203 14:25:54.014514 3173 trace.go:236] Trace[791162738]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:25:44.013) (total time: 10001ms):
Dec 03 14:25:54.014572 master-0 kubenswrapper[3173]: Trace[791162738]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:25:54.014)
Dec 03 14:25:54.014572 master-0 kubenswrapper[3173]: Trace[791162738]: [10.001463056s] [10.001463056s] END
Dec 03 14:25:54.014572 master-0 kubenswrapper[3173]: E1203 14:25:54.014536 3173 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 03 14:25:54.465345 master-0 kubenswrapper[3173]: I1203 14:25:54.465219 3173 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 03 14:25:54.465345 master-0 kubenswrapper[3173]: I1203 14:25:54.465306 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 03 14:25:54.469486 master-0 kubenswrapper[3173]: I1203 14:25:54.469422 3173 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 03 14:25:54.469772 master-0 kubenswrapper[3173]: I1203 14:25:54.469507 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 03 14:25:56.709335 master-0 kubenswrapper[3173]: I1203 14:25:56.709095 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Dec 03 14:25:56.709987 master-0 kubenswrapper[3173]: I1203 14:25:56.709470 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:56.710670 master-0 kubenswrapper[3173]: I1203 14:25:56.710635 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:56.710729 master-0 kubenswrapper[3173]: I1203 14:25:56.710672 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:56.710729 master-0 kubenswrapper[3173]: I1203 14:25:56.710684 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:56.722103 master-0 kubenswrapper[3173]: I1203 14:25:56.722053 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Dec 03 14:25:56.811371 master-0 kubenswrapper[3173]: I1203 14:25:56.811264 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:56.812745 master-0 kubenswrapper[3173]: I1203 14:25:56.812685 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:56.812832 master-0 kubenswrapper[3173]: I1203 14:25:56.812773 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:56.812832 master-0 kubenswrapper[3173]: I1203 14:25:56.812797 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:56.812926 master-0 kubenswrapper[3173]: I1203 14:25:56.812849 3173 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:25:57.086186 master-0 kubenswrapper[3173]: I1203 14:25:57.086138 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:25:57.086931 master-0 kubenswrapper[3173]: I1203 14:25:57.086904 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:25:57.086931 master-0 kubenswrapper[3173]: I1203 14:25:57.086938 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:25:57.086931 master-0 kubenswrapper[3173]: I1203 14:25:57.086956 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: I1203 14:25:57.537644 3173 patch_prober.go:28] interesting
pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]log ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]etcd ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/openshift.io-api-request-count-filter ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/openshift.io-startkubeinformers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-apiserver-admission-initializer ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/generic-apiserver-start-informers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/priority-and-fairness-config-consumer ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/priority-and-fairness-filter ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/storage-object-count-tracker-hook ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-apiextensions-informers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-apiextensions-controllers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/crd-informer-synced ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-system-namespaces-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-cluster-authentication-info-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-legacy-token-tracking-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-service-ip-repair-controllers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/rbac/bootstrap-roles ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/priority-and-fairness-config-producer ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/bootstrap-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/start-kube-aggregator-informers ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-status-local-available-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-status-remote-available-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-registration-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-wait-for-first-sync ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/kube-apiserver-autoregistration ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]autoregister-completion ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-openapi-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: [+]poststarthook/apiservice-openapiv3-controller ok
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: livez check failed
Dec 03 14:25:57.537780 master-0 kubenswrapper[3173]: I1203 14:25:57.537709 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:26:00.501297 master-0 kubenswrapper[3173]: E1203 14:26:00.501205 3173 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Dec 03 14:26:01.666301 master-0 kubenswrapper[3173]: I1203 14:26:01.666194 3173 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 03 14:26:01.925983 master-0 kubenswrapper[3173]: I1203 14:26:01.925732 3173 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Dec 03 14:26:01.925983 master-0 kubenswrapper[3173]: I1203 14:26:01.925976 3173 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Dec 03 14:26:01.925983 master-0 kubenswrapper[3173]: E1203 14:26:01.925995 3173 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Dec 03 14:26:01.930073 master-0 kubenswrapper[3173]: I1203 14:26:01.930014 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Dec 03 14:26:01.930271 master-0 kubenswrapper[3173]: I1203 14:26:01.930057 3173 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:01Z","lastTransitionTime":"2025-12-03T14:26:01Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration
file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 03 14:26:01.941053 master-0 kubenswrapper[3173]: E1203 14:26:01.940956 3173 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 14:26:01.946781 master-0 kubenswrapper[3173]: I1203 14:26:01.946727 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Dec 03 14:26:01.946949 master-0 kubenswrapper[3173]: I1203 14:26:01.946800 3173 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:01Z","lastTransitionTime":"2025-12-03T14:26:01Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 03 14:26:01.956629 master-0 kubenswrapper[3173]: E1203 14:26:01.956571 3173 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 14:26:01.961855 master-0 kubenswrapper[3173]: I1203 14:26:01.961781 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Dec 03 14:26:01.961855 master-0 kubenswrapper[3173]: I1203 14:26:01.961836 3173 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:01Z","lastTransitionTime":"2025-12-03T14:26:01Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?, CSINode is not yet initialized]"}
Dec 03 14:26:01.972835 master-0 kubenswrapper[3173]: E1203 14:26:01.972722 3173 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 14:26:01.978278 master-0 kubenswrapper[3173]: I1203 14:26:01.978211 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Dec 03 14:26:01.978503 master-0 kubenswrapper[3173]: I1203 14:26:01.978279 3173 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:01Z","lastTransitionTime":"2025-12-03T14:26:01Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Dec 03 14:26:01.988074 master-0 kubenswrapper[3173]: E1203 14:26:01.987949 3173 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:01Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 14:26:01.988074 master-0 kubenswrapper[3173]: E1203 14:26:01.988036 3173 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Dec 03 14:26:01.988074 master-0 kubenswrapper[3173]: E1203 14:26:01.988074 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.088697 master-0 kubenswrapper[3173]: E1203 14:26:02.088610 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.189926 master-0 kubenswrapper[3173]: E1203 14:26:02.189697 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.290518 master-0 kubenswrapper[3173]: E1203 14:26:02.290378 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.391687 master-0 kubenswrapper[3173]: E1203 14:26:02.391586 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.492395 master-0 kubenswrapper[3173]: E1203 14:26:02.492190 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.540826 master-0 kubenswrapper[3173]: I1203 14:26:02.540668 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started"
pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:02.541380 master-0 kubenswrapper[3173]: I1203 14:26:02.540928 3173 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:26:02.542865 master-0 kubenswrapper[3173]: I1203 14:26:02.542794 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:26:02.542968 master-0 kubenswrapper[3173]: I1203 14:26:02.542877 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:26:02.542968 master-0 kubenswrapper[3173]: I1203 14:26:02.542896 3173 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:26:02.547748 master-0 kubenswrapper[3173]: I1203 14:26:02.547711 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:02.592991 master-0 kubenswrapper[3173]: E1203 14:26:02.592819 3173 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Dec 03 14:26:02.682612 master-0 kubenswrapper[3173]: I1203 14:26:02.682502 3173 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 14:26:02.830347 master-0 kubenswrapper[3173]: I1203 14:26:02.830239 3173 apiserver.go:52] "Watching apiserver"
Dec 03 14:26:02.858099 master-0 kubenswrapper[3173]: I1203 14:26:02.857406 3173 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 14:26:02.863187 master-0 kubenswrapper[3173]: I1203 14:26:02.862976 3173 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-ingress/router-default-54f97f57-rr9px","openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-ovn-kubernetes/ovnkube-node-txl6b","openshift-apiserver/apiserver-6985f84b49-v9vlg","openshift-catalogd/catalogd-controller-manager-754cfd84-qf898","openshift-kube-apiserver/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n","openshift-monitoring/thanos-querier-cc996c4bd-j4hzr","openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-controller-manager/installer-3-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-ingress-canary/ingress-canary-vkpv4","openshift-kube-scheduler/installer-5-master-0","openshift-machine-config-operator/machine-config-server-pvrfs","openshift-monitoring/metrics-server-555496955b-vpcbs","openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr","openshift-cluster-node-tuning-operator/tuned-7zkbg","openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl","openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29","openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz","openshift-dns/node-resolver-4xlhs","openshift-etcd/installer-1-master-0","openshift-monitoring/alertmanager-main-0","openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg","openshift-multus/network-metrics-daemon-ch7xd","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-kube-apiserver/installer-4-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j","openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w","openshift-etcd/etcd-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","openshift-console-operator/console-operator-77df56447c-vsrxx","openshift-etcd/installer-2-retry-1-master-0","openshift-marketplace/certified-operators-t8rt7","openshift-authentication/oauth-openshift-747bdb58b5-mn76f","openshift-console/console-6c9c84854-xf7nv","openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw","assisted-installer/assisted-installer-controller-stq5g","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-kube-apiserver/installer-6-master-0","openshift-service-ca/service-ca-6b8bb995f7-b68p8","openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5","openshift-kube-apiserver/kube-apiserver-master-0","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-multus/multus-kk4tm","openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq","openshift-dns/dns-default-5m4f8","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k","openshift-monitoring/prometheus-operator-565bdcb8-477pk","openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r","openshift-machine-config-operator/machine-config-daemon-2ztl9","openshift-monitoring/prometheus-k8s-0","openshift-monitor
ing/telemeter-client-764cbf5554-kftwv","openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l","openshift-marketplace/redhat-marketplace-ddwmn","openshift-marketplace/redhat-operators-6z4sc","openshift-network-node-identity/network-node-identity-c8csx","openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w","openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-image-registry/node-ca-4p4zh","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h","openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm","openshift-etcd/installer-2-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql","openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8","openshift-insights/insights-operator-59d99f9b7b-74sss","openshift-network-console/networking-console-plugin-7c696657b7-452tx","openshift-network-diagnostics/network-check-target-pcchm","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-kube-apiserver/installer-5-master-0","openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg","openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-machine-api/machine-api-operator-7486ff55f-wcnxg","openshift-monitoring/node-exporter-b62gf","openshift-kube-scheduler/installer-4-master-0","openshift-multus/multus-admission-controller-84c998f64f-8stq7","openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-multus/multus-additional-cni-plugins-42hmk","openshift-kube-controller-mana
ger/installer-3-retry-1-master-0","openshift-marketplace/community-operators-7fwtv","openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml","openshift-network-operator/iptables-alerter-n24qb","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-console/downloads-6f5db8559b-96ljh","openshift-kube-scheduler/installer-6-master-0"] Dec 03 14:26:02.863673 master-0 kubenswrapper[3173]: I1203 14:26:02.863613 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 14:26:02.863725 master-0 kubenswrapper[3173]: I1203 14:26:02.863694 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:02.863823 master-0 kubenswrapper[3173]: E1203 14:26:02.863781 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:02.863868 master-0 kubenswrapper[3173]: I1203 14:26:02.863633 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:02.863965 master-0 kubenswrapper[3173]: I1203 14:26:02.863914 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:02.864051 master-0 kubenswrapper[3173]: E1203 14:26:02.864025 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:02.864097 master-0 kubenswrapper[3173]: I1203 14:26:02.864083 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:02.864163 master-0 kubenswrapper[3173]: E1203 14:26:02.864133 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:02.864204 master-0 kubenswrapper[3173]: E1203 14:26:02.864190 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:02.864334 master-0 kubenswrapper[3173]: I1203 14:26:02.864306 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:02.864394 master-0 kubenswrapper[3173]: I1203 14:26:02.864308 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:02.864544 master-0 kubenswrapper[3173]: I1203 14:26:02.864344 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:02.864544 master-0 kubenswrapper[3173]: E1203 14:26:02.864362 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:02.864613 master-0 kubenswrapper[3173]: E1203 14:26:02.864547 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:02.864613 master-0 kubenswrapper[3173]: I1203 14:26:02.864580 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:02.864689 master-0 kubenswrapper[3173]: E1203 14:26:02.864577 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:02.864689 master-0 kubenswrapper[3173]: I1203 14:26:02.864648 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:02.865155 master-0 kubenswrapper[3173]: I1203 14:26:02.865125 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:02.865213 master-0 kubenswrapper[3173]: E1203 14:26:02.865184 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:02.865318 master-0 kubenswrapper[3173]: I1203 14:26:02.865274 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:02.865318 master-0 kubenswrapper[3173]: E1203 14:26:02.865306 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:02.865387 master-0 kubenswrapper[3173]: E1203 14:26:02.865356 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:02.865930 master-0 kubenswrapper[3173]: I1203 14:26:02.865892 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:02.866205 master-0 kubenswrapper[3173]: E1203 14:26:02.865987 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:02.866205 master-0 kubenswrapper[3173]: I1203 14:26:02.866076 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:02.866205 master-0 kubenswrapper[3173]: I1203 14:26:02.866141 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:02.866205 master-0 kubenswrapper[3173]: I1203 14:26:02.866157 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:02.866205 master-0 kubenswrapper[3173]: I1203 14:26:02.866190 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866175 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866204 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866227 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866271 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866319 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: I1203 14:26:02.866650 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: I1203 14:26:02.866667 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866686 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:02.866836 master-0 kubenswrapper[3173]: E1203 14:26:02.866735 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:02.867734 master-0 kubenswrapper[3173]: I1203 14:26:02.867120 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:02.867876 master-0 kubenswrapper[3173]: I1203 14:26:02.867842 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: I1203 14:26:02.868152 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kk4tm" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: I1203 14:26:02.868191 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: E1203 14:26:02.868284 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: I1203 14:26:02.868432 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: I1203 14:26:02.868746 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:02.868831 master-0 kubenswrapper[3173]: E1203 14:26:02.868835 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:02.869342 master-0 kubenswrapper[3173]: I1203 14:26:02.869133 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:02.869342 master-0 kubenswrapper[3173]: I1203 14:26:02.869177 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:02.869342 master-0 kubenswrapper[3173]: E1203 14:26:02.869254 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:02.869851 master-0 kubenswrapper[3173]: I1203 14:26:02.869779 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:02.870039 master-0 kubenswrapper[3173]: I1203 14:26:02.869997 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 03 14:26:02.870104 master-0 kubenswrapper[3173]: I1203 14:26:02.870053 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 03 14:26:02.870141 master-0 kubenswrapper[3173]: I1203 14:26:02.870112 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 03 14:26:02.870246 master-0 kubenswrapper[3173]: I1203 14:26:02.870153 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:02.870246 master-0 kubenswrapper[3173]: I1203 14:26:02.870174 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:02.870246 master-0 kubenswrapper[3173]: I1203 14:26:02.870209 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:02.870340 master-0 kubenswrapper[3173]: E1203 14:26:02.870259 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:02.870588 master-0 kubenswrapper[3173]: E1203 14:26:02.870398 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:02.870588 master-0 kubenswrapper[3173]: I1203 14:26:02.870442 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:02.870588 master-0 kubenswrapper[3173]: I1203 14:26:02.870449 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Dec 03 14:26:02.870588 master-0 kubenswrapper[3173]: E1203 14:26:02.870493 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871058 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: E1203 14:26:02.871102 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871140 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871165 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: E1203 14:26:02.871182 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871424 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871491 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871548 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: E1203 14:26:02.871544 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871589 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 03 14:26:02.871715 master-0 kubenswrapper[3173]: I1203 14:26:02.871652 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 14:26:02.872277 master-0 kubenswrapper[3173]: I1203 14:26:02.872023 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:02.872277 master-0 kubenswrapper[3173]: I1203 14:26:02.872109 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:02.872277 master-0 kubenswrapper[3173]: I1203 14:26:02.872184 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Dec 03 14:26:02.872277 master-0 kubenswrapper[3173]: I1203 14:26:02.872253 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 03 14:26:02.872465 master-0 kubenswrapper[3173]: I1203 14:26:02.872312 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:02.872465 master-0 kubenswrapper[3173]: I1203 14:26:02.872461 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 03 14:26:02.872663 master-0 kubenswrapper[3173]: I1203 14:26:02.872621 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:02.872663 master-0 kubenswrapper[3173]: I1203 14:26:02.872646 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: E1203 14:26:02.872837 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: I1203 14:26:02.872968 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: I1203 14:26:02.873018 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: I1203 14:26:02.873033 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: E1203 14:26:02.873026 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:26:02.873182 master-0 kubenswrapper[3173]: I1203 14:26:02.873175 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:26:02.873443 master-0 kubenswrapper[3173]: I1203 14:26:02.873219 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:26:02.873443 master-0 kubenswrapper[3173]: I1203 14:26:02.873290 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:26:02.873443 master-0 kubenswrapper[3173]: I1203 14:26:02.873313 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 14:26:02.873588 master-0 kubenswrapper[3173]: I1203 14:26:02.873501 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:02.873588 master-0 kubenswrapper[3173]: I1203 14:26:02.873520 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 14:26:02.873588 master-0 kubenswrapper[3173]: E1203 14:26:02.873549 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:02.874044 master-0 kubenswrapper[3173]: I1203 14:26:02.873942 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: I1203 14:26:02.874200 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: E1203 14:26:02.874255 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: I1203 14:26:02.874532 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: I1203 14:26:02.874587 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: I1203 14:26:02.874661 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 14:26:02.874740 master-0 kubenswrapper[3173]: I1203 14:26:02.874699 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 14:26:02.875067 master-0 kubenswrapper[3173]: I1203 14:26:02.874757 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 14:26:02.875117 master-0 kubenswrapper[3173]: I1203 14:26:02.875065 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:02.875166 master-0 kubenswrapper[3173]: E1203 14:26:02.875135 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:26:02.875325 master-0 kubenswrapper[3173]: I1203 14:26:02.875270 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:26:02.875405 master-0 kubenswrapper[3173]: I1203 14:26:02.875350 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:02.875445 master-0 kubenswrapper[3173]: E1203 14:26:02.875403 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:26:02.875445 master-0 kubenswrapper[3173]: I1203 14:26:02.875411 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 14:26:02.875737 master-0 kubenswrapper[3173]: I1203 14:26:02.875545 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 14:26:02.875737 master-0 kubenswrapper[3173]: I1203 14:26:02.875547 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:26:02.875737 master-0 kubenswrapper[3173]: I1203 14:26:02.875594 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:26:02.875737 master-0 kubenswrapper[3173]: I1203 14:26:02.875694 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 14:26:02.875897 master-0 kubenswrapper[3173]: I1203 14:26:02.875787 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:26:02.875897 master-0 kubenswrapper[3173]: I1203 14:26:02.875814 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:02.876639 master-0 kubenswrapper[3173]: I1203 14:26:02.876596 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:26:02.877957 master-0 kubenswrapper[3173]: I1203 14:26:02.877139 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 14:26:02.877957 master-0 kubenswrapper[3173]: I1203 14:26:02.877926 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:02.878727 master-0 kubenswrapper[3173]: I1203 14:26:02.878173 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 14:26:02.878727 master-0 kubenswrapper[3173]: E1203 14:26:02.878261 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:02.878727 master-0 kubenswrapper[3173]: I1203 14:26:02.878362 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 14:26:02.878727 master-0 kubenswrapper[3173]: I1203 14:26:02.878624 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: E1203 14:26:02.879040 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.879072 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.879130 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.879508 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: E1203 14:26:02.879581 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.879617 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: E1203 14:26:02.879676 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.879738 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.880059 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: E1203 14:26:02.880146 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: I1203 14:26:02.880160 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:02.880419 master-0 kubenswrapper[3173]: E1203 14:26:02.880217 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:26:02.880965 master-0 kubenswrapper[3173]: I1203 14:26:02.880884 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:02.881040 master-0 kubenswrapper[3173]: E1203 14:26:02.880965 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:26:02.881924 master-0 kubenswrapper[3173]: I1203 14:26:02.881883 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 14:26:02.884739 master-0 kubenswrapper[3173]: I1203 14:26:02.883889 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:02.884820 master-0 kubenswrapper[3173]: E1203 14:26:02.884787 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:02.885263 master-0 kubenswrapper[3173]: I1203 14:26:02.885000 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:02.885263 master-0 kubenswrapper[3173]: E1203 14:26:02.885115 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:26:02.885912 master-0 kubenswrapper[3173]: I1203 14:26:02.885831 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:26:02.886187 master-0 kubenswrapper[3173]: I1203 14:26:02.885981 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:02.886187 master-0 kubenswrapper[3173]: E1203 14:26:02.886056 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:26:02.886461 master-0 kubenswrapper[3173]: I1203 14:26:02.886441 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:02.886692 master-0 kubenswrapper[3173]: I1203 14:26:02.886527 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:02.887809 master-0 kubenswrapper[3173]: E1203 14:26:02.886794 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:26:02.887809 master-0 kubenswrapper[3173]: I1203 14:26:02.887278 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:02.887809 master-0 kubenswrapper[3173]: E1203 14:26:02.887321 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:26:02.887809 master-0 kubenswrapper[3173]: I1203 14:26:02.887720 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 14:26:02.887968 master-0 kubenswrapper[3173]: I1203 14:26:02.887824 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:02.887968 master-0 kubenswrapper[3173]: I1203 14:26:02.887901 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 14:26:02.888068 master-0 kubenswrapper[3173]: I1203 14:26:02.888041 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:26:02.888561 master-0 kubenswrapper[3173]: I1203 14:26:02.888147 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 14:26:02.888561 master-0 kubenswrapper[3173]: E1203 14:26:02.888369 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:26:02.888561 master-0 kubenswrapper[3173]: I1203 14:26:02.888507 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:02.888561 master-0 kubenswrapper[3173]: E1203 14:26:02.888535 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:26:02.889525 master-0 kubenswrapper[3173]: I1203 14:26:02.888841 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:02.889525 master-0 kubenswrapper[3173]: I1203 14:26:02.888842 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"
Dec 03 14:26:02.889525 master-0 kubenswrapper[3173]: E1203 14:26:02.889136 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: I1203 14:26:02.889865 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: I1203 14:26:02.889873 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: I1203 14:26:02.890141 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: I1203 14:26:02.890190 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: E1203 14:26:02.890224 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:26:02.890276 master-0 kubenswrapper[3173]: I1203 14:26:02.890223 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:02.890502 master-0 kubenswrapper[3173]: E1203 14:26:02.890297 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:26:02.890502 master-0 kubenswrapper[3173]: I1203 14:26:02.890312 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 14:26:02.890642 master-0 kubenswrapper[3173]: I1203 14:26:02.890621 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:26:02.891221 master-0 kubenswrapper[3173]: I1203 14:26:02.890759 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 03 14:26:02.891221 master-0 kubenswrapper[3173]: I1203 14:26:02.890825 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Dec 03 14:26:02.891221 master-0 kubenswrapper[3173]: I1203 14:26:02.890939 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:26:02.891221 master-0 kubenswrapper[3173]: I1203 14:26:02.891149 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 03 14:26:02.891401 master-0 kubenswrapper[3173]: I1203 14:26:02.891216 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs"
Dec 03 14:26:02.891401 master-0 kubenswrapper[3173]: I1203 14:26:02.891298 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Dec 03 14:26:02.891401 master-0 kubenswrapper[3173]: I1203 14:26:02.891353 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:02.891497 master-0 kubenswrapper[3173]: E1203 14:26:02.891432 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:26:02.891497 master-0 kubenswrapper[3173]: I1203 14:26:02.891460 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Dec 03 14:26:02.891645 master-0 kubenswrapper[3173]: I1203 14:26:02.891597 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:02.891708 master-0 kubenswrapper[3173]: E1203 14:26:02.891680 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:26:02.892283 master-0 kubenswrapper[3173]: I1203 14:26:02.892141 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 14:26:02.892519 master-0 kubenswrapper[3173]: I1203 14:26:02.892492 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:02.892697 master-0 kubenswrapper[3173]: I1203 14:26:02.892571 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw"
Dec 03 14:26:02.893249 master-0 kubenswrapper[3173]: I1203 14:26:02.892954 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:02.893249 master-0 kubenswrapper[3173]: E1203 14:26:02.893039 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:26:02.893373 master-0 kubenswrapper[3173]: I1203 14:26:02.893341 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 03 14:26:02.893373 master-0 kubenswrapper[3173]: I1203 14:26:02.893357 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:02.893462 master-0 kubenswrapper[3173]: E1203 14:26:02.893425 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:26:02.893599 master-0 kubenswrapper[3173]: I1203 14:26:02.893562 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 03 14:26:02.894020 master-0 kubenswrapper[3173]: I1203 14:26:02.893709 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:02.894020 master-0 kubenswrapper[3173]: E1203 14:26:02.893761 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:26:02.894020 master-0 kubenswrapper[3173]: I1203 14:26:02.893765 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:02.894020 master-0 kubenswrapper[3173]: E1203 14:26:02.893926 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:26:02.894497 master-0 kubenswrapper[3173]: I1203 14:26:02.894174 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Dec 03 14:26:02.894497 master-0 kubenswrapper[3173]: I1203 14:26:02.894182 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:02.894497 master-0 kubenswrapper[3173]: E1203 14:26:02.894330 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:26:02.894497 master-0 kubenswrapper[3173]: I1203 14:26:02.894368 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.894517 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.894639 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: E1203 14:26:02.894683 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.894793 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.894826 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.894970 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:02.895533 master-0 kubenswrapper[3173]: I1203 14:26:02.895296 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Dec 03 14:26:02.896515 master-0 kubenswrapper[3173]: I1203 14:26:02.895959 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:02.896515 master-0 kubenswrapper[3173]: E1203 14:26:02.896102 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9"
Dec 03 14:26:02.896515 master-0 kubenswrapper[3173]: I1203 14:26:02.895959 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:02.896515 master-0 kubenswrapper[3173]: E1203 14:26:02.896192 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b"
Dec 03 14:26:02.896515 master-0 kubenswrapper[3173]: I1203 14:26:02.896429 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 03 14:26:02.896875 master-0 kubenswrapper[3173]: I1203 14:26:02.896760 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: I1203 14:26:02.896979 3173 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: I1203 14:26:02.897115 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: I1203 14:26:02.897166 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: E1203 14:26:02.897267 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: E1203 14:26:02.897302 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048"
Dec 03 14:26:02.897589 master-0 kubenswrapper[3173]: I1203 14:26:02.897474 3173 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 03 14:26:02.897803 master-0 kubenswrapper[3173]: I1203 14:26:02.897784 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:02.897838 master-0 kubenswrapper[3173]: I1203 14:26:02.897809 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Dec 03 14:26:02.897838 master-0 kubenswrapper[3173]: E1203 14:26:02.897819 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7"
Dec 03 14:26:02.897925 master-0 kubenswrapper[3173]: I1203 14:26:02.897845 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Dec 03 14:26:02.897925 master-0 kubenswrapper[3173]: I1203 14:26:02.897852 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv"
Dec 03 14:26:02.898024 master-0 kubenswrapper[3173]: I1203 14:26:02.897950 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Dec 03 14:26:02.899369 master-0 kubenswrapper[3173]: I1203 14:26:02.899340 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Dec 03 14:26:02.899542 master-0 kubenswrapper[3173]: I1203 14:26:02.899477 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0"
Dec 03 14:26:02.899594 master-0 kubenswrapper[3173]: I1203 14:26:02.899577 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:02.899665 master-0 kubenswrapper[3173]: E1203 14:26:02.899637 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88"
Dec 03 14:26:02.900404 master-0 kubenswrapper[3173]: I1203 14:26:02.900371 3173 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:02.900562 master-0 kubenswrapper[3173]: E1203 14:26:02.900532 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:02.900877 master-0 kubenswrapper[3173]: I1203 14:26:02.900853 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:02.900928 master-0 kubenswrapper[3173]: E1203 14:26:02.900906 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:02.901684 master-0 kubenswrapper[3173]: I1203 14:26:02.901658 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:02.901747 master-0 kubenswrapper[3173]: E1203 14:26:02.901706 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:02.902211 master-0 kubenswrapper[3173]: I1203 14:26:02.902155 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:02.902339 master-0 kubenswrapper[3173]: E1203 14:26:02.902295 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:02.905036 master-0 kubenswrapper[3173]: I1203 14:26:02.904860 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"803897bb-580e-4f7a-9be2-583fc607d1f6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-olm-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/operand-assets\\\",\\\"name\\\":\\\"operand-assets\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fw8h8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-olm-operator\"/\"cluster-olm-operator-589f5cdc9d-5h2kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:02.926825 master-0 kubenswrapper[3173]: I1203 14:26:02.926320 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/alertmanager-main-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d838c1a-22e2-4096-9739-7841ef7d06ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"alertmanager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/certs\\\",\\\"name\\\":\\\"tls-assets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/alertmanager\\\",\\\"name\\\":\\\"alertmanager-main-db\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"alertmanager-trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"config-reloader\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-metric\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prom-label-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z96q6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"alertmanager-main-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:02.926825 master-0 kubenswrapper[3173]: I1203 14:26:02.926767 3173 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 03 14:26:02.927201 master-0 kubenswrapper[3173]: I1203 14:26:02.926838 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 03 14:26:02.947064 master-0 kubenswrapper[3173]: I1203 14:26:02.946943 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"201c1f63-757e-4efd-8eef-8edaf196315f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:43Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:40Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:40Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:40Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ed71d
197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"startTime\\\":\\\"2025-12-03T14:25:40Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:02.948296 master-0 kubenswrapper[3173]: I1203 14:26:02.948090 3173 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 14:26:02.960501 master-0 kubenswrapper[3173]: I1203 14:26:02.960429 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-n24qb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kcpv9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-n24qb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:02.974560 master-0 kubenswrapper[3173]: I1203 14:26:02.973988 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmpfs\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/tmp/k8s-webhook-server/serving-certs\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p7ss6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-7c64dd9d8b-49skr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:02.990191 master-0 kubenswrapper[3173]: I1203 14:26:02.990106 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"33a557d1-cdd9-47ff-afbd-a301e7f589a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/client-ca\\\",\\\"name\\\":\\\"client-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dmqvl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-74cff6cf84-bh8rz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.003946 master-0 kubenswrapper[3173]: I1203 14:26:03.003872 3173 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1933a6a0-6ccc-4629-a0e5-a5a4b4575771\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29412855-jmbvv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.018971 master-0 kubenswrapper[3173]: I1203 14:26:03.018887 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-6z4sc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"911f6333-cdb0-425c-b79b-f892444b7097\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhf9r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-6z4sc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.033195 master-0 kubenswrapper[3173]: I1203 14:26:03.033108 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6fa89f-268c-477b-9f04-238d2305cc89\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcc-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-955zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-955zg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-74cddd4fb5-phk6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.046379 master-0 kubenswrapper[3173]: I1203 14:26:03.046295 3173 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcc78129-4a81-410e-9a42-b12043b5a75a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x22gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x22gr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-85dbd94574-8jfp5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.061320 master-0 kubenswrapper[3173]: I1203 14:26:03.061231 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/private\\\",\\\"name\\\":\\\"default-certificate\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/haproxy/conf/metrics-auth\\\",\\\"name\\\":\\\"stats-auth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-certs\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57rrp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ingress\"/\"router-default-54f97f57-rr9px\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.100675 master-0 kubenswrapper[3173]: I1203 14:26:03.100414 3173 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-monitoring/prometheus-k8s-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56649bd4-ac30-4a70-8024-772294fede88\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus config-reloader thanos-sidecar kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-thanos]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus config-reloader thanos-sidecar kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-thanos]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"config-reloader\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/prometheus/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/config_out\\\",\\\"name\\\":\\\"config-out\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/rules/prometheus-k8s-rulefiles-0\\\",\\\"name\\\":\\\"prometheus-k8s-rulefiles-0\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-thanos\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-thanos-sidecar-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-prometheus-k8s-kube-rbac-proxy-web\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"prometheus-trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/config_out\\\",\\\"name\\\":\\\"config-out\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/certs\\\",\\\"name\\\":\\\"tls-assets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/prometheus\\\",\\\"name\\\":\\\"prometheus-k8s-db\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-tls\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls\\\",\\\"name\\\":\\\"secret-prometheus-k8s-thanos-sidecar-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-prometheus-k8s-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/metrics-client-certs\\\",\\\"name\\\":\\\"secret-metrics-client-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/configmaps/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"configmap-serving-certs-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheu
s/configmaps/kubelet-serving-ca-bundle\\\",\\\"name\\\":\\\"configmap-kubelet-serving-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/configmaps/metrics-client-ca\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/rules/prometheus-k8s-rulefiles-0\\\",\\\"name\\\":\\\"prometheus-k8s-rulefiles-0\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"thanos-sidecar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/grpc\\\",\\\"name\\\":\\\"secret-grpc-tls\\\"},{\\\"mountPath\\\":\\\"/etc/thanos/config\\\",\\\"name\\\":\\\"thanos-prometheus-http-client-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cjpnb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-k8s-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.105783 master-0 kubenswrapper[3173]: I1203 14:26:03.105740 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:03.112988 master-0 kubenswrapper[3173]: I1203 14:26:03.112914 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9362c4d7-96a9-4400-b7d3-fd4f196b3d0f\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-3-retry-1-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.120721 master-0 kubenswrapper[3173]: E1203 14:26:03.120671 3173 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:03.135801 master-0 kubenswrapper[3173]: I1203 14:26:03.135600 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a561f6b2-988e-4e71-b615-4c1d7331d011\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:42Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:44Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:56Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:25:40Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7424a3adff7dce95e229689db3a097554825a0a1b6fc1da3f511760d76ff1a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be55425be92502579ba54e0a7029374fa5869946a681a8d47fee9f3e2abb52ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e40eeccb22154afc36511e259a0bbd0340bbb8c152ccc392f07b9b63e9286432\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://273350e7b0aeceae0168f90588eb07e0ee52a413f6434e0abfb74158cc482c9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb83f75a316413d7cd4039c1ecf1652c36407775bf11a763ce99c299576a480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:25:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\
\\"cri-o://dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:25:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:25:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:25:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:25:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:25:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:25:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"startTime\\\":\\\"2025-12-03T14:25:40Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.148032 master-0 kubenswrapper[3173]: I1203 14:26:03.147936 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adbcce01-7282-4a75-843a-9623060346f0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jkbcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-7c4697b5f5-9f69p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.159096 master-0 kubenswrapper[3173]: I1203 14:26:03.158994 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f723d97-5c65-4ae7-9085-26db8b4f2f52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator graceful-termination]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator graceful-termination]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"graceful-termination\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwv7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wwv7s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-5bcf58cf9c-dvklg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.178311 master-0 kubenswrapper[3173]: I1203 14:26:03.178214 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9f484c1-1564-49c7-a43d-bd8b971cea20\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"machine-api-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjbsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/machine-api-operator-config/images\\\",\\\"name\\\":\\\"images\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjbsl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-7486ff55f-wcnxg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.191723 master-0 kubenswrapper[3173]: I1203 14:26:03.191624 
3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbch4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6348dedc0513e2c77aed5601dc5969274ac7c75fadd32b7280b3ec06e76b93bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:14:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:11:31Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbch4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2ztl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.202991 master-0 kubenswrapper[3173]: I1203 14:26:03.202911 3173 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec89938d-35a5-46ba-8c63-12489db18cbd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"etc-ssl-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cvo/updatepayloads\\\",\\\"name\\\":\\\"etc-cvo-updatepayloads\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/service-ca\\\",\\\"name\\\":\\\"service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-7c49fbfc6f-7krqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.219842 master-0 kubenswrapper[3173]: I1203 14:26:03.219663 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"829d285f-d532-45e4-b1ec-54adbc21b9f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [telemeter-client reload kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [telemeter-client reload kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"telemeter-client-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"secret-telemeter-client-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"reload\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"serving-certs-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28b3ba29ff038781d3742df4ab05fac69a92cf2bf058c25487e47a2f4ff02627\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"telemeter-client\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"serving-certs-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/telemeter\\\",\\\"name\\\":\\\"secret-telemeter-client\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"federate-client-tls\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"telemeter-trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"telemeter-client-764cbf5554-kftwv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.233030 master-0 kubenswrapper[3173]: I1203 14:26:03.232880 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-5-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5db1386-71f6-4b27-b686-5a3bb35659fa\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver\"/\"installer-5-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.241132 master-0 kubenswrapper[3173]: I1203 14:26:03.241055 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4xlhs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42c95e54-b4ba-4b19-a97c-abcec840ac5d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b6tjl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-dns\"/\"node-resolver-4xlhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.253345 master-0 kubenswrapper[3173]: I1203 14:26:03.253271 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-5f574c6c79-86bh9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.263892 master-0 kubenswrapper[3173]: I1203 14:26:03.263827 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3c1ebb9-f052-410b-a999-45e9b75b0e58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvzf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mvzf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-ch7xd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.275845 master-0 kubenswrapper[3173]: I1203 14:26:03.275777 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"prometheus-operator-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"prometheus-operator-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97xsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97xsn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-operator-565bdcb8-477pk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.287556 master-0 kubenswrapper[3173]: I1203 14:26:03.287480 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"image-registry-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kk4tx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-65dc4bcb88-96zcz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.298283 master-0 kubenswrapper[3173]: I1203 14:26:03.298219 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v429m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-pcchm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.338831 master-0 kubenswrapper[3173]: I1203 14:26:03.338720 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36da3c2f-860c-4188-a7d7-5b615981a835\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/signing-key\\\",\\\"name\\\":\\\"signing-key\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/signing-cabundle\\\",\\\"name\\\":\\\"signing-cabundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jzlgx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-6b8bb995f7-b68p8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.375280 master-0 kubenswrapper[3173]: I1203 14:26:03.374855 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"690d1f81-7b1f-4fd0-9b6e-154c9687c744\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"baremetal-kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/baremetal-kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-baremetal-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wh8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b671df8b2d0b6b869c0c59899a5d812092d7f4173a1907580514cf63b3d2cf1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T14:20:31Z\\\",\\\"message\\\":\\\"E1203 14:20:31.392191 1 main.go:144] \\\\\\\"unable to get enabled features\\\\\\\" err=\\\\\\\"unable to determine Platform: the server was unable to return a response in the time allotted, but may still be processing the request (get infrastructures.config.openshift.io 
cluster)\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T14:19:31Z\\\"}},\\\"name\\\":\\\"cluster-baremetal-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/images\\\",\\\"name\\\":\\\"images\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wh8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-baremetal-operator-5fdc576499-j2n8j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.432687 master-0 kubenswrapper[3173]: I1203 14:26:03.432571 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj4f8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-68c95b6cf5-fmdmz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.475673 master-0 kubenswrapper[3173]: I1203 14:26:03.475529 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15782f65-35d2-4e95-bf49-81541c683ffe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"tuned\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/modprobe.d\\\",\\\"name\\\":\\\"etc-modprobe-d\\\"},{\\\"mountPath\\\":\\\"/etc/sysconfig\\\",\\\"name\\\":\\\"etc-sysconfig\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.d\\\",\\\"name\\\":\\\"etc-sysctl-d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.conf\\\",\\\"name\\\":\\\"etc-sysctl-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd\\\",\\\"name\\\":\\\"etc-systemd\\\"},{\\\"mountPath\\\":\\\"/etc/tuned\\\",\\\"name\\\":\\\"etc-tuned\\\"},{\\\"mountPath\\\":\\\"/run\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/sys\\\",\\\"name\\\":\\\"sys\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5jtgh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"tuned-7zkbg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 03 14:26:03.513817 master-0 kubenswrapper[3173]: I1203 14:26:03.513715 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"24dfafc9-86a9-450e-ac62-a871138106c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m789m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-57fd58bc7b-kktql\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.554481 master-0 kubenswrapper[3173]: I1203 14:26:03.554369 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b95a5a6-db93-4a58-aaff-3619d130c8cb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-storage-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-storage-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nc9nj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"cluster-storage-operator-f84784664-ntb9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.594335 master-0 kubenswrapper[3173]: I1203 14:26:03.594240 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [metrics-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with 
unready status: [metrics-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"metrics-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-metrics-server-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/metrics-client-certs\\\",\\\"name\\\":\\\"secret-metrics-client-certs\\\"},{\\\"mountPath\\\":\\\"/etc/tls/kubelet-serving-ca-bundle\\\",\\\"name\\\":\\\"configmap-kubelet-serving-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/audit\\\",\\\"name\\\":\\\"metrics-server-audit-profiles\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/metrics-server\\\",\\\"name\\\":\\\"audit-log\\\"},{\\\"mountPath\\\":\\\"/etc/client-ca-bundle\\\",\\\"name\\\":\\\"client-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mq4w9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"metrics-server-555496955b-vpcbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.633031 master-0 kubenswrapper[3173]: I1203 14:26:03.632823 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/installer-2-retry-1-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f539ea7-39a7-422f-82d9-7603eede84cf\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd\"/\"installer-2-retry-1-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.646625 master-0 kubenswrapper[3173]: I1203 14:26:03.646599 3173 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:03.700657 master-0 kubenswrapper[3173]: I1203 14:26:03.700574 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7663a25e-236d-4b1d-83ce-733ab146dee3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy cluster-autoscaler-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
cluster-autoscaler-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-autoscaler-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-autoscaler-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"auth-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ltsnd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-autoscaler-operator-7f88444875-6dk29\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.733954 master-0 kubenswrapper[3173]: I1203 14:26:03.733827 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"55351b08-d46d-4327-aa5e-ae17fdffdfb5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://868b4cdc8a4fe91b5ca34a18b1d879aa41665f52be1f78b8a23f6bad9d2f2106\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:19:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:08:57Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nxt87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-7d67745bb7-dwcxb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.772107 master-0 kubenswrapper[3173]: I1203 14:26:03.771906 3173 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6935a3f8-723e-46e6-8498-483f34bf0825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wc6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08acd077553f72d39a3338430ca8c622c61126e0810d50f76c2ab4bda2d6067f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-03T14:19:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-03T14:08:23Z\\\"}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8wc6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-f9f7f4946-48mrg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.818533 master-0 
kubenswrapper[3173]: I1203 14:26:03.818398 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"da583723-b3ad-4a6f-b586-09b739bd7f8c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beebef07f0cead91e9334247c292ae81789441d58dee39e91d6971b5f65df56\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T14:18:08Z\\\",\\\"message\\\":\\\"nnection refused\\\\\\\"\\\\nI1203 14:15:21.624163 1 reflector.go:543] \\\\\\\"Watch closed\\\\\\\" logger=\\\\\\\"controller-runtime.cache\\\\\\\" reflector=\\\\\\\"sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114\\\\\\\" type=\\\\\\\"*v1.CertificateSigningRequest\\\\\\\" err=\\\\\\\"too old resource version: 18667 (18834)\\\\\\\"\\\\nI1203 14:15:41.349039 1 reflector.go:403] \\\\\\\"Listing and 
watching\\\\\\\" logger=\\\\\\\"controller-runtime.cache\\\\\\\" type=\\\\\\\"*v1.CertificateSigningRequest\\\\\\\" reflector=\\\\\\\"sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114\\\\\\\"\\\\nI1203 14:15:41.352500 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" logger=\\\\\\\"controller-runtime.cache\\\\\\\" type=\\\\\\\"*v1.CertificateSigningRequest\\\\\\\" reflector=\\\\\\\"sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114\\\\\\\"\\\\nI1203 14:15:41.353858 1 approver.go:230] Finished syncing CSR csr-4jc9b for unknown node in 416.242µs\\\\nI1203 14:15:41.354000 1 approver.go:230] Finished syncing CSR csr-6rbmd for unknown node in 44.451µs\\\\nE1203 14:17:38.067554 1 leaderelection.go:429] Failed to update lock optimistically: Put \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": context deadline exceeded, falling back to slow path\\\\nE1203 14:17:53.066991 1 leaderelection.go:436] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": context deadline exceeded\\\\nI1203 14:17:53.067111 1 leaderelection.go:297] failed to renew lease openshift-network-node-identity/ovnkube-identity: context deadline exceeded\\\\nE1203 14:18:08.068042 1 leaderelection.go:322] Failed to release lock: Put \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": context deadline exceeded\\\\nF1203 14:18:08.068363 1 ovnkubeidentity.go:309] error running approver: leader election 
lost\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T14:08:22Z\\\"}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqnb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gqnb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-c8csx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Dec 03 14:26:03.858731 master-0 kubenswrapper[3173]: I1203 14:26:03.858631 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p6dpf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-6964bb78b7-g4lv2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.893117 master-0 kubenswrapper[3173]: I1203 14:26:03.892959 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[download-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"download-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5nch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console\"/\"downloads-6f5db8559b-96ljh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.936970 master-0 kubenswrapper[3173]: I1203 14:26:03.936780 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a12409a-0be3-4023-9df3-a0f091aac8dc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [thanos-query kube-rbac-proxy-web kube-rbac-proxy prom-label-proxy kube-rbac-proxy-rules kube-rbac-proxy-metrics]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [thanos-query kube-rbac-proxy-web kube-rbac-proxy prom-label-proxy kube-rbac-proxy-rules kube-rbac-proxy-metrics]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-thanos-querier-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-thanos-querier-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-thanos-querier-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-thanos-querier-kube-rbac-proxy-metrics\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-rules\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-thanos-querier-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-thanos-querier-kube-rbac-proxy-rules\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-thanos-querier-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-thanos-querier-kube-rbac-proxy-web\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prom-label-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"thanos-query\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/grpc\\\",\\\"name\\\":\\\"secret-grpc-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wddf4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"thanos-querier-cc996c4bd-j4hzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:03.973701 master-0 kubenswrapper[3173]: I1203 14:26:03.973554 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [snapshot-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[snapshot-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://adcadf77d5e60e9ed13ace0602115805c594e9b0d06238f78faad31846eb01c8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T14:20:17Z\\\",\\\"message\\\":\\\"I1203 14:19:47.782609 1 feature_gate.go:387] feature gates: {map[]}\\\\nI1203 14:19:47.782801 1 main.go:169] Version: accf32dab5e4a0e26be3c53b65aff728f400aade\\\\nI1203 14:19:47.784031 1 main.go:220] Start NewCSISnapshotController with kubeconfig [] resyncPeriod [15m0s]\\\\nE1203 14:20:17.809399 1 main.go:104] Failed to list v1 volumesnapshotclasses with error=Get \\\\\\\"https://172.30.0.1:443/apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses?limit=1\\\\\\\": context deadline exceeded\\\\nE1203 14:20:17.809528 1 main.go:246] Exiting due to failure to ensure CRDs exist during startup: context deadline exceeded\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T14:19:47Z\\\"}},\\\"name\\\":\\\"snapshot-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqkdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-86897dd478-qqwh7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Dec 03 14:26:03.997943 master-0 kubenswrapper[3173]: I1203 14:26:03.997732 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:03.997943 master-0 kubenswrapper[3173]: I1203 14:26:03.997756 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:03.997943 master-0 kubenswrapper[3173]: I1203 14:26:03.997823 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:03.997943 master-0 kubenswrapper[3173]: I1203 14:26:03.997844 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: E1203 14:26:03.998103 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998136 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998183 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998233 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998235 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998243 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998289 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:03.998360 master-0 kubenswrapper[3173]: I1203 14:26:03.998295 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998300 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998408 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998303 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998237 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: E1203 14:26:03.998405 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998454 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998472 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998421 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: I1203 14:26:03.998667 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: E1203 14:26:03.998678 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:03.998863 master-0 kubenswrapper[3173]: E1203 14:26:03.998774 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:03.999434 master-0 kubenswrapper[3173]: E1203 14:26:03.998904 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:03.999434 master-0 kubenswrapper[3173]: E1203 14:26:03.999033 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:03.999434 master-0 kubenswrapper[3173]: E1203 14:26:03.999191 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:03.999434 master-0 kubenswrapper[3173]: E1203 14:26:03.999294 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:03.999628 master-0 kubenswrapper[3173]: E1203 14:26:03.999508 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:03.999677 master-0 kubenswrapper[3173]: E1203 14:26:03.999632 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:03.999841 master-0 kubenswrapper[3173]: E1203 14:26:03.999787 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:03.999925 master-0 kubenswrapper[3173]: E1203 14:26:03.999905 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:04.000048 master-0 kubenswrapper[3173]: E1203 14:26:03.999983 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:04.000157 master-0 kubenswrapper[3173]: E1203 14:26:04.000111 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:04.000252 master-0 kubenswrapper[3173]: E1203 14:26:04.000216 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:04.000315 master-0 kubenswrapper[3173]: E1203 14:26:04.000287 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:04.000474 master-0 kubenswrapper[3173]: E1203 14:26:04.000418 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:04.000692 master-0 kubenswrapper[3173]: E1203 14:26:04.000624 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:04.000767 master-0 kubenswrapper[3173]: E1203 14:26:04.000747 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:04.015490 master-0 kubenswrapper[3173]: I1203 14:26:04.015395 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ncwtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-ddwmn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.051642 master-0 kubenswrapper[3173]: I1203 14:26:04.051559 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/installer-3-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bad1b1ab-3ff1-45f4-86ce-1a5713c59ef8\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager\"/\"installer-3-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.096451 master-0 kubenswrapper[3173]: I1203 14:26:04.096330 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa79e15-1875-4865-b5e0-aecd4c447bad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"package-server-manager-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7q659\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7q659\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-75b4d49d4c-h599p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.108444 master-0 kubenswrapper[3173]: I1203 14:26:04.108402 3173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:04.135760 master-0 kubenswrapper[3173]: I1203 
14:26:04.135678 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-t8rt7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a192c38a-4bfa-40fe-9a2d-d48260cf6443\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fn7fm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-t8rt7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.178798 master-0 kubenswrapper[3173]: I1203 14:26:04.178697 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/node-exporter-b62gf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-exporter kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-exporter 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"node-exporter-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"node-exporter-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tqqf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-exporter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/sys\\\",\\\"name\\\":\\\"sys\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/root\\\",\\\"name\\\":\\\"root\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/node_exporter/textfile\\\",\\\"name\\\":\\\"node-exporter-textfile\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tqqf2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-monitoring\"/\"node-exporter-b62gf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.216143 master-0 kubenswrapper[3173]: I1203 14:26:04.216050 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19c2a40b-213c-42f1-9459-87c2e780a75f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbdtx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-42hmk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.257668 master-0 kubenswrapper[3173]: I1203 14:26:04.257415 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-kk4tm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c777c9de-1ace-46be-b5c2-c71d252f53f4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5fn5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-kk4tm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.296623 master-0 kubenswrapper[3173]: I1203 14:26:04.296546 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/installer-2-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3165c60b-3cd2-4bda-8c55-aecf00bef18d\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd\"/\"installer-2-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.333786 master-0 kubenswrapper[3173]: I1203 14:26:04.333708 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52100521-67e9-40c9-887c-eda6560f06e0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgq6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-7978bf889c-n64v4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.376665 master-0 kubenswrapper[3173]: I1203 14:26:04.376503 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://664c3ff2947feb1a70318c5ff0027e09769155ce2375556704dbb4dae528edde\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-03T14:19:28Z\\\",\\\"message\\\":\\\"ble\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"d3c55f40-0c5e-4410-96fb-704e2dd8a1b3\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI1203 14:16:23.834039 1 controller.go:184] \\\\\\\"Finished reconciling control plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"d3c55f40-0c5e-4410-96fb-704e2dd8a1b3\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI1203 14:16:40.100151 1 controller.go:170] \\\\\\\"Reconciling control 
plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"2f81bd08-216b-41c8-9e36-29e22c0e9fd0\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI1203 14:16:40.100228 1 controller.go:178] \\\\\\\"No control plane machine set found, setting operator status available\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"2f81bd08-216b-41c8-9e36-29e22c0e9fd0\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI1203 14:16:40.100315 1 controller.go:184] \\\\\\\"Finished reconciling control plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"2f81bd08-216b-41c8-9e36-29e22c0e9fd0\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nE1203 14:17:41.250806 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE1203 14:18:41.252379 1 leaderelection.go:436] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io control-plane-machine-set-leader)\\\\nI1203 14:18:54.248108 1 leaderelection.go:297] failed to renew lease openshift-machine-api/control-plane-machine-set-leader: timed out waiting for the condition\\\\nE1203 14:19:28.253661 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\\\\nE1203 14:19:28.253838 1 main.go:233] \\\\\\\"problem running manager\\\\\\\" err=\\\\\\\"leader election lost\\\\\\\" 
logger=\\\\\\\"setup\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-03T14:09:00Z\\\"}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/k8s-webhook-server/serving-certs\\\",\\\"name\\\":\\\"control-plane-machine-set-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mk6r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-66f4cc99d4-x278n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.422585 master-0 kubenswrapper[3173]: I1203 14:26:04.422388 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/community-operators-7fwtv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bff18a80-0b0f-40ab-862e-e8b1ab32040a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcqxx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"community-operators-7fwtv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.469074 master-0 kubenswrapper[3173]: I1203 14:26:04.469019 3173 trace.go:236] Trace[157866325]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Dec-2025 14:25:48.384) (total time: 16084ms): Dec 03 14:26:04.469074 master-0 kubenswrapper[3173]: Trace[157866325]: ---"Objects listed" error: 16084ms (14:26:04.468) Dec 03 14:26:04.469074 
master-0 kubenswrapper[3173]: Trace[157866325]: [16.084784755s] [16.084784755s] END Dec 03 14:26:04.469074 master-0 kubenswrapper[3173]: I1203 14:26:04.469054 3173 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:04.475423 master-0 kubenswrapper[3173]: I1203 14:26:04.475376 3173 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 03 14:26:04.492558 master-0 kubenswrapper[3173]: I1203 14:26:04.492492 3173 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:04.547242 master-0 kubenswrapper[3173]: I1203 14:26:04.547186 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04e9e2a5-cdc2-42af-ab2c-49525390be6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dv7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-bbd9b9dff-rrfsm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.576972 master-0 kubenswrapper[3173]: I1203 14:26:04.576887 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98392f8e-0285-4bc3-95a9-d29033639ca3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djxkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djxkd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-6b7bcd6566-jh9m8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:04.577297 master-0 kubenswrapper[3173]: I1203 14:26:04.577133 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.577297 master-0 kubenswrapper[3173]: I1203 14:26:04.577193 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.577297 master-0 kubenswrapper[3173]: I1203 14:26:04.577228 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.577297 master-0 kubenswrapper[3173]: I1203 14:26:04.577282 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:04.577434 master-0 kubenswrapper[3173]: I1203 14:26:04.577312 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.577434 
master-0 kubenswrapper[3173]: I1203 14:26:04.577338 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:04.577517 master-0 kubenswrapper[3173]: I1203 14:26:04.577492 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.577564 master-0 kubenswrapper[3173]: I1203 14:26:04.577518 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.577599 master-0 kubenswrapper[3173]: E1203 14:26:04.577576 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:04.577665 master-0 kubenswrapper[3173]: E1203 14:26:04.577587 3173 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:04.577719 master-0 kubenswrapper[3173]: E1203 14:26:04.577675 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.077649088 +0000 UTC m=+25.559026520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:04.577719 master-0 kubenswrapper[3173]: E1203 14:26:04.577692 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:04.577801 master-0 kubenswrapper[3173]: E1203 14:26:04.577729 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:04.577801 master-0 kubenswrapper[3173]: E1203 14:26:04.577776 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.077739221 +0000 UTC m=+25.559116783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:04.577869 master-0 kubenswrapper[3173]: E1203 14:26:04.577814 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.077801873 +0000 UTC m=+25.559179255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:04.577869 master-0 kubenswrapper[3173]: E1203 14:26:04.577781 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:04.578066 master-0 kubenswrapper[3173]: I1203 14:26:04.577863 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:04.578066 master-0 kubenswrapper[3173]: E1203 14:26:04.577878 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.077868505 +0000 UTC m=+25.559245887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:04.578066 master-0 kubenswrapper[3173]: E1203 14:26:04.577941 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.077929926 +0000 UTC m=+25.559307528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:04.578066 master-0 kubenswrapper[3173]: I1203 14:26:04.577968 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.578066 master-0 kubenswrapper[3173]: I1203 14:26:04.578024 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:04.578242 master-0 kubenswrapper[3173]: I1203 14:26:04.578075 3173 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:04.578242 master-0 kubenswrapper[3173]: I1203 14:26:04.578132 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.578242 master-0 kubenswrapper[3173]: I1203 14:26:04.578162 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.578242 master-0 kubenswrapper[3173]: I1203 14:26:04.578190 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:04.578242 master-0 kubenswrapper[3173]: I1203 14:26:04.578194 3173 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 14:26:04.578842 master-0 kubenswrapper[3173]: E1203 14:26:04.578821 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:04.578902 master-0 kubenswrapper[3173]: E1203 14:26:04.578838 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:04.578945 master-0 kubenswrapper[3173]: E1203 14:26:04.578868 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.078856453 +0000 UTC m=+25.560233835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:04.578994 master-0 kubenswrapper[3173]: E1203 14:26:04.578977 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.078941945 +0000 UTC m=+25.560319327 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:04.579058 master-0 kubenswrapper[3173]: I1203 14:26:04.578225 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.579058 master-0 kubenswrapper[3173]: E1203 14:26:04.579023 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:04.579122 master-0 kubenswrapper[3173]: I1203 14:26:04.579072 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.579122 master-0 kubenswrapper[3173]: I1203 14:26:04.579104 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:04.579184 master-0 kubenswrapper[3173]: E1203 14:26:04.579132 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.07910701 +0000 UTC m=+25.560484452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:04.579184 master-0 kubenswrapper[3173]: E1203 14:26:04.579142 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:04.579184 master-0 kubenswrapper[3173]: I1203 14:26:04.579106 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: E1203 14:26:04.579193 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079176632 +0000 UTC m=+25.560554004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: E1203 14:26:04.579147 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: I1203 14:26:04.579220 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: E1203 14:26:04.579233 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079222973 +0000 UTC m=+25.560600355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: I1203 14:26:04.579251 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.579276 master-0 kubenswrapper[3173]: I1203 14:26:04.579273 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579296 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579322 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579339 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: E1203 14:26:04.579346 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579356 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579376 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: E1203 14:26:04.579389 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079378447 +0000 UTC m=+25.560755839 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579414 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: E1203 14:26:04.579428 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: I1203 14:26:04.579448 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:04.579476 master-0 kubenswrapper[3173]: E1203 14:26:04.579451 3173 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579498 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: 
\"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579513 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579516 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079504351 +0000 UTC m=+25.560881813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579564 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579604 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579635 3173 secret.go:189] 
Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579663 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079640205 +0000 UTC m=+25.561017667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579697 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079686326 +0000 UTC m=+25.561063798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: E1203 14:26:04.579715 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.079705587 +0000 UTC m=+25.561083089 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579735 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579765 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579789 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:04.579823 master-0 kubenswrapper[3173]: I1203 14:26:04.579815 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " 
pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579842 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579866 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579889 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579916 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579939 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod 
\"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579963 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.579991 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580040 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580072 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: E1203 14:26:04.580082 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 
14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580096 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: E1203 14:26:04.580121 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080110478 +0000 UTC m=+25.561488080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580143 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580176 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:04.580274 
master-0 kubenswrapper[3173]: E1203 14:26:04.580200 3173 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580203 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: E1203 14:26:04.580248 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:04.580274 master-0 kubenswrapper[3173]: I1203 14:26:04.580272 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: E1203 14:26:04.580281 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080271663 +0000 UTC m=+25.561649285 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580318 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: E1203 14:26:04.580335 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580346 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: E1203 14:26:04.580364 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080355585 +0000 UTC m=+25.561733207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580387 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: E1203 14:26:04.580404 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580422 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580447 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: E1203 14:26:04.580469 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080449238 +0000 UTC m=+25.561826790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580500 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580534 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580561 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580628 3173 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580663 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580691 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580723 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580764 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.580832 master-0 kubenswrapper[3173]: I1203 14:26:04.580139 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580855 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580858 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580872 3173 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.580885 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580893 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08088111 +0000 UTC m=+25.562258692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580932 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580947 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580954 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080937702 +0000 UTC m=+25.562315274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.580975 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.080966012 +0000 UTC m=+25.562343614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581027 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581038 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581062 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081052855 +0000 UTC m=+25.562430427 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581077 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081071455 +0000 UTC m=+25.562448837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581079 3173 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581027 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581100 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:05.081088316 +0000 UTC m=+25.562465928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581093 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581143 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081137807 +0000 UTC m=+25.562515189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581134 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581159 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581168 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581193 3173 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581170 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581300 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081289752 +0000 UTC m=+25.562667134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581327 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581329 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581335 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581380 3173 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581382 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581412 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: I1203 14:26:04.581436 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581441 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081399425 +0000 UTC m=+25.562776977 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:04.581456 master-0 kubenswrapper[3173]: E1203 14:26:04.581476 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.081464597 +0000 UTC m=+25.562842209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581520 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.581535 3173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581564 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.581583 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08156932 +0000 UTC m=+25.562946852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581607 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581650 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581678 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581711 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581742 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581785 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581821 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.581918 3173 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.581961 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.581965 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08195493 +0000 UTC m=+25.563332532 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582018 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582032 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582072 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.582091 3173 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582108 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.582122 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.082113295 +0000 UTC m=+25.563490677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582144 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582181 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.582218 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582213 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: E1203 14:26:04.582247 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.082238829 +0000 UTC m=+25.563616441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582269 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582302 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582331 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582368 3173 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582397 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582424 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582456 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582486 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " 
pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582496 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.582487 master-0 kubenswrapper[3173]: I1203 14:26:04.582492 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582557 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582586 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582613 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582629 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582643 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582669 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582678 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.082666201 +0000 UTC m=+25.564043773 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582709 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582730 3173 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582751 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582770 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.082754593 +0000 UTC m=+25.564132165 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582802 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582828 3173 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582849 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582871 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.082857616 +0000 UTC m=+25.564235198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582829 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.582749 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.582871 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583070 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583106 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 
14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583104 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583120 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083105003 +0000 UTC m=+25.564482575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583164 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583221 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083188885 +0000 UTC m=+25.564566267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583181 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583226 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583245 3173 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583258 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583239 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 
nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083231677 +0000 UTC m=+25.564609059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583388 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583466 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083411202 +0000 UTC m=+25.564788584 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583494 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583550 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: E1203 14:26:04.583503 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083488864 +0000 UTC m=+25.564866466 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:04.584156 master-0 kubenswrapper[3173]: I1203 14:26:04.583716 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583754 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583793 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583809 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583829 3173 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583840 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083829314 +0000 UTC m=+25.565206896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583856 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083849504 +0000 UTC m=+25.565226886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583876 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583893 3173 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583905 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583926 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583927 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583934 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083924726 +0000 UTC m=+25.565302108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583975 3173 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.583975 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583980 3173 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584020 
3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.083992468 +0000 UTC m=+25.565369850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.583998 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584045 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584052 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584059 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08405258 +0000 UTC m=+25.565429962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584074 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584083 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584083 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084075141 +0000 UTC m=+25.565452513 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584169 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584196 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584243 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584266 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584284 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584304 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584310 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584324 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: E1203 14:26:04.584339 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.084331768 +0000 UTC m=+25.565709140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584360 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584381 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584400 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:04.585304 master-0 kubenswrapper[3173]: I1203 14:26:04.584420 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: 
\"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584438 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584466 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584479 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584487 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584506 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod 
\"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584512 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084503283 +0000 UTC m=+25.565880765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584526 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584544 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584548 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584563 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584559 3173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584629 3173 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584640 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084620426 +0000 UTC m=+25.565997868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584660 3173 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584585 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584672 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584737 3173 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584797 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584846 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 
14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584663 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084652317 +0000 UTC m=+25.566029779 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584875 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084868183 +0000 UTC m=+25.566245565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584887 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084880173 +0000 UTC m=+25.566257555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: E1203 14:26:04.584898 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.084892594 +0000 UTC m=+25.566269976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584929 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584949 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584970 3173 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.584989 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585025 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585043 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585062 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585079 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585088 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585098 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585115 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585134 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: 
\"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585151 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:04.587206 master-0 kubenswrapper[3173]: I1203 14:26:04.585170 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585190 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585210 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585227 
3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585244 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585262 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585280 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585300 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585302 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585317 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585339 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585359 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585358 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod 
\"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585368 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585378 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585390 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585399 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085392228 +0000 UTC m=+25.566769610 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585423 3173 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585425 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585429 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585443 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085437619 +0000 UTC m=+25.566815001 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: I1203 14:26:04.585474 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585478 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08547301 +0000 UTC m=+25.566850392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585487 3173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585506 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.085494101 +0000 UTC m=+25.566871593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585529 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585544 3173 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585552 3173 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585578 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585592 3173 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585557 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.085552082 +0000 UTC m=+25.566929464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585562 3173 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585613 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085605814 +0000 UTC m=+25.566983196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585631 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085623524 +0000 UTC m=+25.567000986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585636 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:04.588655 master-0 kubenswrapper[3173]: E1203 14:26:04.585648 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085640745 +0000 UTC m=+25.567018227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585669 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085657795 +0000 UTC m=+25.567035267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.585681 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585684 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085676836 +0000 UTC m=+25.567054318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585708 3173 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585719 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085707117 +0000 UTC m=+25.567084499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585761 3173 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585781 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.585991 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.085736078 +0000 UTC m=+25.567113560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586053 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086026086 +0000 UTC m=+25.567403548 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586080 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586116 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586139 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586191 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " 
pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586215 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586235 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586240 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586295 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586318 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586357 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586372 3173 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586319 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586394 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086385656 +0000 UTC m=+25.567763038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586411 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586427 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086416797 +0000 UTC m=+25.567794179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586540 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08652725 +0000 UTC m=+25.567904682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586561 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086553091 +0000 UTC m=+25.567930473 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586576 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086568781 +0000 UTC m=+25.567946273 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586438 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586599 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586618 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086606972 +0000 UTC m=+25.567984354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586635 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: E1203 14:26:04.586654 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:04.589857 master-0 kubenswrapper[3173]: I1203 14:26:04.586659 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.586679 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086670814 +0000 UTC m=+25.568048196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.586695 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086689915 +0000 UTC m=+25.568067297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.586709 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.586808 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.086795988 +0000 UTC m=+25.568173370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586712 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586845 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586875 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586902 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586947 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.586984 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.587018 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587042 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.587048 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.087038955 +0000 UTC m=+25.568416337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587076 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.587091 3173 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587104 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.587116 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.087109217 +0000 UTC m=+25.568486599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.586736 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587133 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: E1203 14:26:04.587157 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.087148498 +0000 UTC m=+25.568525880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587173 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587198 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587228 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587253 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587280 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587303 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587328 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587352 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587377 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587400 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587426 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587453 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587478 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587504 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:04.592174 master-0 kubenswrapper[3173]: I1203 14:26:04.587547 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587575 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587601 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587627 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587629 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587654 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587682 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587710 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587734 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587757 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587789 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587812 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587840 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587866 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587890 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587916 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587930 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587940 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587965 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587987 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.587994 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.588052 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.588080 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.588112 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588140 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588176 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088164677 +0000 UTC m=+25.569542139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588246 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588263 3173 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588275 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088266959 +0000 UTC m=+25.569644431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588292 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.08828443 +0000 UTC m=+25.569661812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588322 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588351 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088338711 +0000 UTC m=+25.569716093 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588407 3173 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588421 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: E1203 14:26:04.588436 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088426364 +0000 UTC m=+25.569803746 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:04.593670 master-0 kubenswrapper[3173]: I1203 14:26:04.588431 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588453 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088444394 +0000 UTC m=+25.569821866 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588525 3173 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588549 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588559 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088549757 +0000 UTC m=+25.569927149 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588565 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588605 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088586139 +0000 UTC m=+25.569963591 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588682 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588142 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588742 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.587354 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588771 3173 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588780 3173 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588792 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588813 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088801215 +0000 UTC m=+25.570178707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588832 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.088824325 +0000 UTC m=+25.570201827 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588864 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.588892 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588902 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588894 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.588999 3173 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589051 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.589060 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089047062 +0000 UTC m=+25.570424444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589089 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589121 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589146 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589166 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") 
pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589209 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589208 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589262 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.589269 3173 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589293 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: 
\"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: E1203 14:26:04.589305 3173 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589323 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589333 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589348 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:04.595513 master-0 kubenswrapper[3173]: I1203 14:26:04.589377 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: 
\"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589446 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089436583 +0000 UTC m=+25.570813965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589448 3173 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589488 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589496 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089488284 +0000 UTC m=+25.570865766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589535 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589570 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589600 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589609 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: 
\"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589628 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589663 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589682 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589695 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589732 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.089718491 +0000 UTC m=+25.571095943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589504 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589799 3173 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589809 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589820 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589795 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod 
\"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589845 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589857 3173 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589874 3173 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589802 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089785373 +0000 UTC m=+25.571162815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589901 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.089893306 +0000 UTC m=+25.571270688 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589917 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089911856 +0000 UTC m=+25.571289238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589922 3173 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589930 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089924616 +0000 UTC m=+25.571301998 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589946 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089937857 +0000 UTC m=+25.571315239 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589958 3173 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589955 3173 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.589962 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.089955967 +0000 UTC m=+25.571333349 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.589998 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.590037 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.090023199 +0000 UTC m=+25.571400581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: E1203 14:26:04.590057 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.09004905 +0000 UTC m=+25.571426442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:04.596985 master-0 kubenswrapper[3173]: I1203 14:26:04.590081 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590080 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590131 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590097 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590159 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590195 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590254 3173 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590271 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590300 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.090292207 +0000 UTC m=+25.571669589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590321 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590349 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590366 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590419 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590455 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.090427391 +0000 UTC m=+25.571804773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590459 3173 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590473 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.090465962 +0000 UTC m=+25.571843344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590378 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590488 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.090479922 +0000 UTC m=+25.571857404 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590509 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590536 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590565 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.590593 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.590638 3173 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591220 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591339 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.591497 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.091481371 +0000 UTC m=+25.572858753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.591571 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.591621 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.591661 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.091650795 +0000 UTC m=+25.573028177 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591698 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: E1203 14:26:04.591721 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.091713437 +0000 UTC m=+25.573090939 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591746 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591781 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591810 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591837 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" 
(UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.598294 master-0 kubenswrapper[3173]: I1203 14:26:04.591863 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.591892 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.591922 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.591952 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.591980 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592024 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592057 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592083 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592111 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:04.599382 master-0 
kubenswrapper[3173]: I1203 14:26:04.592142 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592168 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592225 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.592258 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596365 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod 
\"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596441 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596505 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596545 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596576 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596613 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596778 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596821 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596861 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596901 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: 
\"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.596928 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.597143 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.597191 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.597220 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:04.599382 master-0 kubenswrapper[3173]: I1203 14:26:04.597246 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602658 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602692 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602743 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602758 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602769 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10275043 +0000 UTC m=+25.584127812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602807 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602817 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.102803332 +0000 UTC m=+25.584180704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602828 3173 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602854 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602883 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602889 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.602909 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.102886164 +0000 UTC m=+25.584263546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602930 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.602953 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603026 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603034 3173 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603061 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603075 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.103062149 +0000 UTC m=+25.584439531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603101 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603142 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603181 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603213 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603246 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603294 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603353 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod 
\"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603408 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603516 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603528 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603578 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603629 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603711 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603585 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603691 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: I1203 14:26:04.603292 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603803 3173 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603710 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle 
podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.103696797 +0000 UTC m=+25.585074179 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:04.606440 master-0 kubenswrapper[3173]: E1203 14:26:04.603846 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.103834721 +0000 UTC m=+25.585212103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.603863 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.103854662 +0000 UTC m=+25.585232044 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.603913 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.603980 3173 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604020 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.103970045 +0000 UTC m=+25.585347427 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.603914 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: I1203 14:26:04.603925 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604056 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104035397 +0000 UTC m=+25.585412779 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.603929 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604069 3173 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604088 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604119 3173 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604126 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104107199 +0000 UTC m=+25.585484581 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604028 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604146 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.1041358 +0000 UTC m=+25.585513182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604192 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10416288 +0000 UTC m=+25.585540262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: I1203 14:26:04.604224 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604233 3173 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604262 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104238462 +0000 UTC m=+25.585615844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604241 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604284 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: I1203 14:26:04.604315 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604364 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104351906 +0000 UTC m=+25.585729298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: I1203 14:26:04.604400 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: I1203 14:26:04.604452 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604406 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604414 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604455 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 
14:26:04.604462 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104442838 +0000 UTC m=+25.585820220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604527 3173 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604533 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604545 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10452263 +0000 UTC m=+25.585900012 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604568 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104557441 +0000 UTC m=+25.585934833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604593 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104579752 +0000 UTC m=+25.585957134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604610 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:05.104600813 +0000 UTC m=+25.585978195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:04.608092 master-0 kubenswrapper[3173]: E1203 14:26:04.604629 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104621883 +0000 UTC m=+25.585999265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.604689 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104647184 +0000 UTC m=+25.586024566 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.604712 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.104706716 +0000 UTC m=+25.586084098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604772 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604829 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: 
E1203 14:26:04.604873 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10485735 +0000 UTC m=+25.586234732 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604909 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604941 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604965 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") "
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.604995 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605082 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605104 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605129 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605155 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName:
\"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605178 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605203 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605216 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605225 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605331 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:05.105322363 +0000 UTC m=+25.586699735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605441 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605469 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.105462337 +0000 UTC m=+25.586839719 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605507 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605531 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605746 3173 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605753 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605752 3173 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.605777 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605822 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.105811037 +0000 UTC m=+25.587188419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605834 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.605905 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.105888939 +0000 UTC m=+25.587266321 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.606016 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10593028 +0000 UTC m=+25.587307662 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.606068 3173 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.606125 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: E1203 14:26:04.606145 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:05.106130056 +0000 UTC m=+25.587507438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:04.610234 master-0 kubenswrapper[3173]: I1203 14:26:04.606219 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606280 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.106228019 +0000 UTC m=+25.587605401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606368 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.106356402 +0000 UTC m=+25.587733774 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606384 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606434 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.106422564 +0000 UTC m=+25.587799946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606444 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606524 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606567 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606596 3173 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606601 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606649 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10663886 +0000 UTC m=+25.588016242 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606738 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606810 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606845 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.606887 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\")
" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: E1203 14:26:04.606933 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607073 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607353 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607402 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607432 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") "
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607468 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607500 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607537 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607568 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607601 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName:
\"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607637 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607674 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607712 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607743 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607782 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607814 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607853 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607897 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607928 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\")
pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.607970 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.613336 master-0 kubenswrapper[3173]: I1203 14:26:04.608031 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608077 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608116 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608154 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName:
\"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608181 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608226 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608274 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608318 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608357 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608397 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608434 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608476 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608506 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: E1203 14:26:04.608544 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.108515604 +0000 UTC m=+25.589892996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608685 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: E1203 14:26:04.608716 3173 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608737 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: E1203 14:26:04.608764 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.10875265 +0000 UTC m=+25.590130032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608800 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608839 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608872 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608903 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608935 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608965 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.608989 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609039 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609072 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609104 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609135 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609163 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609196 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609230 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609263 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609298 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609353 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609389 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:04.614683 master-0 kubenswrapper[3173]: I1203 14:26:04.609433 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609470 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.608843 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609504 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609510 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.109502152 +0000 UTC m=+25.590879534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609541 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609573 3173 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609630 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609683 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609724 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.609770 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609547 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609608 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.608959 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609136 3173 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609185 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609249 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609342 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609422 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609471 3173 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609632 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609684 3173 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.609748 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.610055 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610087 3173 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610151 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.11013737 +0000 UTC m=+25.591514752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610209 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610239 3173 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610280 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110263703 +0000 UTC m=+25.591641085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610326 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110311345 +0000 UTC m=+25.591688727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.608909 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.610448 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610512 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.11049193 +0000 UTC m=+25.591869312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610784 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110770058 +0000 UTC m=+25.592147440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610810 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110799138 +0000 UTC m=+25.592176520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610837 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110823929 +0000 UTC m=+25.592201311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: I1203 14:26:04.610849 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610898 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110884451 +0000 UTC m=+25.592261833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:26:04.616081 master-0 kubenswrapper[3173]: E1203 14:26:04.610917 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110909342 +0000 UTC m=+25.592286734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.610945 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110926292 +0000 UTC m=+25.592303674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.610952 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.610970 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110958793 +0000 UTC m=+25.592336175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611303 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611381 3173 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611404 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611555 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.611586 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611603 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.110983674 +0000 UTC m=+25.592361056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.611630 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611636 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111627222 +0000 UTC m=+25.593004604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611679 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111664533 +0000 UTC m=+25.593041915 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611699 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611708 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111691944 +0000 UTC m=+25.593069326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611729 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111720335 +0000 UTC m=+25.593097717 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611746 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111738465 +0000 UTC m=+25.593115847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611771 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111760236 +0000 UTC m=+25.593137618 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611790 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111783186 +0000 UTC m=+25.593160568 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611820 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111812687 +0000 UTC m=+25.593190069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: E1203 14:26:04.611838 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.111829168 +0000 UTC m=+25.593206550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.612270 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.612780 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.612858 3173 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.613993 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.617825 master-0 kubenswrapper[3173]: I1203 14:26:04.617085 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:04.619528 master-0 kubenswrapper[3173]: I1203 14:26:04.619480 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.700923 master-0 kubenswrapper[3173]: E1203 14:26:04.700327 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:04.700923 master-0 kubenswrapper[3173]: E1203 14:26:04.700375 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:04.700923 master-0 
kubenswrapper[3173]: E1203 14:26:04.700391 3173 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.700923 master-0 kubenswrapper[3173]: E1203 14:26:04.700458 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.20043495 +0000 UTC m=+25.681812332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:04.704914 master-0 kubenswrapper[3173]: I1203 14:26:04.704863 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.704998 master-0 kubenswrapper[3173]: I1203 14:26:04.704957 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:04.711062 
master-0 kubenswrapper[3173]: I1203 14:26:04.710958 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.711062 master-0 kubenswrapper[3173]: I1203 14:26:04.711045 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.711190 master-0 kubenswrapper[3173]: I1203 14:26:04.711142 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.711250 master-0 kubenswrapper[3173]: I1203 14:26:04.711215 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.711330 master-0 kubenswrapper[3173]: I1203 14:26:04.711307 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.711384 master-0 kubenswrapper[3173]: 
I1203 14:26:04.711340 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.711384 master-0 kubenswrapper[3173]: I1203 14:26:04.711367 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.711476 master-0 kubenswrapper[3173]: I1203 14:26:04.711393 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:04.711476 master-0 kubenswrapper[3173]: I1203 14:26:04.711444 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.711578 master-0 kubenswrapper[3173]: I1203 14:26:04.711552 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " 
pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.711665 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.711697 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.711817 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.712125 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.712166 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 
03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.712210 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.712321 master-0 kubenswrapper[3173]: I1203 14:26:04.712320 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.712634 master-0 kubenswrapper[3173]: I1203 14:26:04.712349 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.712634 master-0 kubenswrapper[3173]: I1203 14:26:04.712388 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.712634 master-0 kubenswrapper[3173]: I1203 14:26:04.712500 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.712634 master-0 kubenswrapper[3173]: I1203 14:26:04.712542 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.712634 master-0 kubenswrapper[3173]: I1203 14:26:04.712598 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.712819 master-0 kubenswrapper[3173]: I1203 14:26:04.712718 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.712861 master-0 kubenswrapper[3173]: I1203 14:26:04.712851 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.712905 master-0 kubenswrapper[3173]: I1203 14:26:04.712880 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod 
\"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.712947 master-0 kubenswrapper[3173]: I1203 14:26:04.712920 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.712995 master-0 kubenswrapper[3173]: I1203 14:26:04.712934 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.713066 master-0 kubenswrapper[3173]: I1203 14:26:04.713049 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.713109 master-0 kubenswrapper[3173]: I1203 14:26:04.713089 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.713109 master-0 kubenswrapper[3173]: I1203 14:26:04.713100 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:04.713194 master-0 kubenswrapper[3173]: I1203 14:26:04.713128 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713253 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713303 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713319 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713337 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713370 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713384 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713409 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713427 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.713502 master-0 kubenswrapper[3173]: I1203 14:26:04.713474 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713539 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713555 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713562 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713569 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713581 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713606 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713610 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713620 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713605 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713636 3173 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713657 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713659 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713673 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713672 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713688 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713698 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713712 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713729 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713747 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713757 3173 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713783 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713795 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713785 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713819 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713907 3173 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.713952 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.714163 master-0 kubenswrapper[3173]: I1203 14:26:04.714165 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714238 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714352 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714481 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714501 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714559 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714579 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714582 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714609 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") 
pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714662 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714675 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714744 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714852 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714856 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" 
(UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714879 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714936 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714937 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714967 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.714993 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " 
pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715051 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715116 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715160 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715179 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715193 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 
14:26:04.715249 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715328 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715433 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715534 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715587 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715641 
3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715680 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715701 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715750 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715774 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.772264 
master-0 kubenswrapper[3173]: I1203 14:26:04.715791 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715834 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715853 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715871 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.715946 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 
14:26:04.715964 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.716057 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.716077 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.772264 master-0 kubenswrapper[3173]: I1203 14:26:04.716189 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716257 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 
14:26:04.716298 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716334 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716454 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716471 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716567 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716690 3173 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716729 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716750 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716926 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716964 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716974 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716994 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.716999 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717037 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717056 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717100 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717071 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717152 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717178 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717152 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717198 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717152 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717215 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717065 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.717261 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: E1203 14:26:04.719644 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: E1203 14:26:04.719666 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: E1203 14:26:04.719679 3173 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: E1203 14:26:04.719730 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.219713777 +0000 UTC m=+25.701091149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.759605 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-n24qb"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.771727 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:04.774840 master-0 kubenswrapper[3173]: I1203 14:26:04.772096 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:04.777367 master-0 kubenswrapper[3173]: W1203 14:26:04.775026 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ef37bba_85d9_4303_80c0_aac3dc49d3d9.slice/crio-f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530 WatchSource:0}: Error finding container f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530: Status 404 returned error can't find the container with id f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530
Dec 03 14:26:04.777367 master-0 kubenswrapper[3173]: I1203 14:26:04.776196 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:04.786152 master-0 kubenswrapper[3173]: E1203 14:26:04.786067 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.786152 master-0 kubenswrapper[3173]: E1203 14:26:04.786104 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.786152 master-0 kubenswrapper[3173]: E1203 14:26:04.786118 3173 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.786341 master-0 kubenswrapper[3173]: E1203 14:26:04.786186 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.286166352 +0000 UTC m=+25.767543734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.786828 master-0 kubenswrapper[3173]: W1203 14:26:04.786797 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15782f65_35d2_4e95_bf49_81541c683ffe.slice/crio-bd585fc3c73133608057df7a2e69631e3f3aa12537420489646283434c5629df WatchSource:0}: Error finding container bd585fc3c73133608057df7a2e69631e3f3aa12537420489646283434c5629df: Status 404 returned error can't find the container with id bd585fc3c73133608057df7a2e69631e3f3aa12537420489646283434c5629df
Dec 03 14:26:04.790956 master-0 kubenswrapper[3173]: E1203 14:26:04.790818 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.790956 master-0 kubenswrapper[3173]: E1203 14:26:04.790840 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.790956 master-0 kubenswrapper[3173]: E1203 14:26:04.790851 3173 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.790956 master-0 kubenswrapper[3173]: E1203 14:26:04.790907 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.290892766 +0000 UTC m=+25.772270148 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.793962 master-0 kubenswrapper[3173]: I1203 14:26:04.793926 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 14:26:04.808753 master-0 kubenswrapper[3173]: E1203 14:26:04.808702 3173 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.808753 master-0 kubenswrapper[3173]: E1203 14:26:04.808747 3173 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.808918 master-0 kubenswrapper[3173]: E1203 14:26:04.808762 3173 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.808918 master-0 kubenswrapper[3173]: E1203 14:26:04.808830 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.308808104 +0000 UTC m=+25.790185476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.811280 master-0 kubenswrapper[3173]: E1203 14:26:04.811239 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.811388 master-0 kubenswrapper[3173]: E1203 14:26:04.811294 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.811388 master-0 kubenswrapper[3173]: E1203 14:26:04.811307 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.811388 master-0 kubenswrapper[3173]: E1203 14:26:04.811374 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.311361006 +0000 UTC m=+25.792738388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.855137 master-0 kubenswrapper[3173]: I1203 14:26:04.855027 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:04.858178 master-0 kubenswrapper[3173]: I1203 14:26:04.855396 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:26:04.866129 master-0 kubenswrapper[3173]: E1203 14:26:04.865785 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.866129 master-0 kubenswrapper[3173]: E1203 14:26:04.865826 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.866129 master-0 kubenswrapper[3173]: E1203 14:26:04.865838 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.866129 master-0 kubenswrapper[3173]: E1203 14:26:04.865916 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.365898513 +0000 UTC m=+25.847275895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.884770 master-0 kubenswrapper[3173]: E1203 14:26:04.882980 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.884770 master-0 kubenswrapper[3173]: E1203 14:26:04.883062 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.884770 master-0 kubenswrapper[3173]: E1203 14:26:04.883085 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.884770 master-0 kubenswrapper[3173]: E1203 14:26:04.883177 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.383143892 +0000 UTC m=+25.864521284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.900156 master-0 kubenswrapper[3173]: E1203 14:26:04.900091 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.900156 master-0 kubenswrapper[3173]: E1203 14:26:04.900132 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.900156 master-0 kubenswrapper[3173]: E1203 14:26:04.900150 3173 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.901405 master-0 kubenswrapper[3173]: E1203 14:26:04.900229 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.400206836 +0000 UTC m=+25.881584218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.918290 master-0 kubenswrapper[3173]: E1203 14:26:04.918077 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.918290 master-0 kubenswrapper[3173]: E1203 14:26:04.918127 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.918290 master-0 kubenswrapper[3173]: E1203 14:26:04.918145 3173 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.918290 master-0 kubenswrapper[3173]: E1203 14:26:04.918231 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.418203786 +0000 UTC m=+25.899581168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.939992 master-0 kubenswrapper[3173]: E1203 14:26:04.939844 3173 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.939992 master-0 kubenswrapper[3173]: E1203 14:26:04.939886 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.939992 master-0 kubenswrapper[3173]: E1203 14:26:04.939954 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.439931332 +0000 UTC m=+25.921308704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.972300 master-0 kubenswrapper[3173]: E1203 14:26:04.972254 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:04.972300 master-0 kubenswrapper[3173]: E1203 14:26:04.972295 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:04.972404 master-0 kubenswrapper[3173]: E1203 14:26:04.972309 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.972404 master-0 kubenswrapper[3173]: E1203 14:26:04.972383 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.472359062 +0000 UTC m=+25.953736444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997150 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997186 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997217 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997186 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997194 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997246 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997296 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997301 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: E1203 14:26:04.997307 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997326 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997301 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997346 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997368 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997348 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997348 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: E1203 14:26:04.997457 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997473 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997701 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997706 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997735 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997744 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997754 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997735 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997777 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997788 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997797 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997798 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997827 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997832 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997810 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997831 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997880 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997884 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997846 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997813 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997928 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997945 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997899 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997959 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997903 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997847 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997886 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997999 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:04.997956 master-0 kubenswrapper[3173]: I1203 14:26:04.997930 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.997886 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.998072 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.998082 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.997919 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.997922 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998024 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.997962 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.997890 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: I1203 14:26:04.998151 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998166 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998254 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998335 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998427 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998704 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998780 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998867 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998935 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.998996 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.999107 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.999251 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.999346 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.999441 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:04.999568 master-0 kubenswrapper[3173]: E1203 14:26:04.999579 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:04.999673 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:04.999728 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:04.999805 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:04.999862 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:04.999935 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:05.000045 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:05.000110 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:05.000180 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:26:05.000288 master-0 kubenswrapper[3173]: E1203 14:26:05.000236 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:26:05.000546 master-0 kubenswrapper[3173]: E1203 14:26:05.000309 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:26:05.000546 master-0 kubenswrapper[3173]: E1203 14:26:05.000361 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a"
Dec 03 14:26:05.000546 master-0 kubenswrapper[3173]: E1203 14:26:05.000426 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:26:05.000546 master-0 kubenswrapper[3173]: E1203 14:26:05.000514 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:26:05.000658 master-0 kubenswrapper[3173]: E1203 14:26:05.000595 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:26:05.000691 master-0 kubenswrapper[3173]: E1203 14:26:05.000667 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:26:05.000835 master-0 kubenswrapper[3173]: E1203 14:26:05.000742 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b"
Dec 03 14:26:05.000835 master-0 kubenswrapper[3173]: E1203 14:26:05.000803 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4"
Dec 03 14:26:05.000927 master-0 kubenswrapper[3173]: E1203 14:26:05.000871 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:26:05.000962 master-0 kubenswrapper[3173]: E1203 14:26:05.000924 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:26:05.001223 master-0 kubenswrapper[3173]: E1203 14:26:05.001029 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:26:05.001223 master-0 kubenswrapper[3173]: E1203 14:26:05.001130 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9"
Dec 03 14:26:05.001223 master-0 kubenswrapper[3173]: E1203 14:26:05.001203 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:26:05.001350 master-0 kubenswrapper[3173]: E1203 14:26:05.001276 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:26:05.001399 master-0 kubenswrapper[3173]: E1203 14:26:05.001372 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:26:05.001555 master-0 kubenswrapper[3173]: E1203 14:26:05.001469 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:26:05.001555 master-0 kubenswrapper[3173]: E1203 14:26:05.001529 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048"
Dec 03 14:26:05.001636 master-0 kubenswrapper[3173]: E1203 14:26:05.001594 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:26:05.001677 master-0 kubenswrapper[3173]: E1203 14:26:05.001654 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:26:05.001931 master-0 kubenswrapper[3173]: E1203 14:26:05.001746 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:26:05.001931 master-0 kubenswrapper[3173]: E1203 14:26:05.001905 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88"
Dec 03 14:26:05.002092 master-0 kubenswrapper[3173]: E1203 14:26:05.002063 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba"
Dec 03 14:26:05.045142 master-0 kubenswrapper[3173]: I1203 14:26:05.045065 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 14:26:05.057355 master-0 kubenswrapper[3173]: W1203 14:26:05.057296 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6935a3f8_723e_46e6_8498_483f34bf0825.slice/crio-95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4 WatchSource:0}: Error finding container 95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4: Status 404 returned error can't find the container with id 95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4
Dec 03 14:26:05.117985 master-0 kubenswrapper[3173]: E1203 14:26:05.117901 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.117985 master-0 kubenswrapper[3173]: E1203 14:26:05.117944 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.117985 master-0 kubenswrapper[3173]: E1203 14:26:05.117962 3173 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.118472 master-0 kubenswrapper[3173]: E1203 14:26:05.118047 3173 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.618023443 +0000 UTC m=+26.099400825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.120431 master-0 kubenswrapper[3173]: E1203 14:26:05.120407 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.120431 master-0 kubenswrapper[3173]: E1203 14:26:05.120427 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.120534 master-0 kubenswrapper[3173]: E1203 14:26:05.120438 3173 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.120534 master-0 kubenswrapper[3173]: E1203 14:26:05.120475 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.620466122 +0000 UTC m=+26.101843504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120638 3173 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120656 3173 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120665 3173 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120693 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.620684968 +0000 UTC m=+26.102062350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120868 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120886 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120897 3173 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: E1203 14:26:05.120938 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.620923895 +0000 UTC m=+26.102301277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.121146 master-0 kubenswrapper[3173]: I1203 14:26:05.121094 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq"
Dec 03 14:26:05.121817 master-0 kubenswrapper[3173]: I1203 14:26:05.121774 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:05.122704 master-0 kubenswrapper[3173]: I1203 14:26:05.122658 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"9b5b765f0d18e2987c8c74b07747c419b7e8f87f50ce5aefb0e8d171683ce30d"}
Dec 03 14:26:05.122704 master-0 kubenswrapper[3173]: I1203 14:26:05.122700 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" event={"ID":"15782f65-35d2-4e95-bf49-81541c683ffe","Type":"ContainerStarted","Data":"bd585fc3c73133608057df7a2e69631e3f3aa12537420489646283434c5629df"}
Dec 03 14:26:05.123779 master-0 kubenswrapper[3173]: I1203 14:26:05.123555 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530"}
Dec 03 14:26:05.123779 master-0 kubenswrapper[3173]: I1203 14:26:05.123687 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:05.124880 master-0 kubenswrapper[3173]: I1203 14:26:05.124780 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4"}
Dec 03 14:26:05.124880 master-0 kubenswrapper[3173]: I1203 14:26:05.124807 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:05.130713 master-0 kubenswrapper[3173]: I1203 14:26:05.130674 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:05.130781 master-0 kubenswrapper[3173]: I1203 14:26:05.130718 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:05.130781 master-0 kubenswrapper[3173]: I1203 14:26:05.130744 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:05.130781 master-0 kubenswrapper[3173]: I1203 14:26:05.130769 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:05.130937 master-0 kubenswrapper[3173]: I1203 14:26:05.130793 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:05.130937 master-0 kubenswrapper[3173]: I1203 14:26:05.130828 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:05.130937 master-0 kubenswrapper[3173]:
E1203 14:26:05.130833 3173 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:05.130937 master-0 kubenswrapper[3173]: E1203 14:26:05.130900 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:05.130937 master-0 kubenswrapper[3173]: E1203 14:26:05.130923 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.130896608 +0000 UTC m=+26.612273990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.130950 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131044 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131073 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: I1203 14:26:05.130846 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.130951 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.130941189 +0000 UTC m=+26.612318791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131138 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131152 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131130884 +0000 UTC m=+26.612508456 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131170 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131162805 +0000 UTC m=+26.612540187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131186 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131179486 +0000 UTC m=+26.612557108 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: I1203 14:26:05.131208 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:05.131222 master-0 kubenswrapper[3173]: E1203 14:26:05.131223 3173 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131256 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131246328 +0000 UTC m=+26.612623920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131288 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131318 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131311039 +0000 UTC m=+26.612688431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131335 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13132767 +0000 UTC m=+26.612705062 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131312 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131363 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131374 3173 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131384 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:05.131686 master-0 
kubenswrapper[3173]: E1203 14:26:05.131423 3173 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131426 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131412152 +0000 UTC m=+26.612789754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131469 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131490 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131485184 +0000 UTC m=+26.612862566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131485 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131531 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131566 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131540146 +0000 UTC m=+26.612917528 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131591 3173 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131612 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131630 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131617588 +0000 UTC m=+26.612995220 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131660 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131683 3173 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: I1203 14:26:05.131702 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:05.131686 master-0 kubenswrapper[3173]: E1203 14:26:05.131716 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131707581 +0000 UTC m=+26.613085163 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.131738 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131767 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.131772 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131807 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131794233 +0000 UTC m=+26.613171855 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131846 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.131856 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131886 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.131875485 +0000 UTC m=+26.613253047 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131908 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.131911 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131923 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.131982 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.131993 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.131981628 +0000 UTC m=+26.613359190 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132044 3173 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132060 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132047 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132090 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132115 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13205073 +0000 UTC m=+26.613428142 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132100 3173 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132178 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132156833 +0000 UTC m=+26.613534435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132216 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132204695 +0000 UTC m=+26.613582077 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132235 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132227455 +0000 UTC m=+26.613604837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132288 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132306 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132290327 +0000 UTC m=+26.613667869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132341 3173 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132370 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132362629 +0000 UTC m=+26.613740011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132369 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132409 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod 
\"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132475 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132512 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132412 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132549 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: I1203 14:26:05.132566 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132591 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132584275 +0000 UTC m=+26.613961657 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:05.132719 master-0 kubenswrapper[3173]: E1203 14:26:05.132632 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132610736 +0000 UTC m=+26.613988128 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132666 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132708 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132695229 +0000 UTC m=+26.614072791 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132752 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132781 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132801 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132826 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132834 3173 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132827902 +0000 UTC m=+26.614205284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132872 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132928 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132877 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.132965 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132983 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132975397 +0000 UTC m=+26.614352779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133045 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133054 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133060 3173 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133084 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133096 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13308328 +0000 UTC m=+26.614460842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132439 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133126 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13311341 +0000 UTC m=+26.614491002 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133133 3173 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132873 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133163 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133150382 +0000 UTC m=+26.614527774 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132491 3173 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132938 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133188 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133177802 +0000 UTC m=+26.614555204 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.132632 3173 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133221 3173 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133231 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133222514 +0000 UTC m=+26.614599896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133260 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.133250944 +0000 UTC m=+26.614628526 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133309 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133340 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133389 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133412 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object 
"openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: I1203 14:26:05.133434 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133444 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13343649 +0000 UTC m=+26.614813872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:05.134138 master-0 kubenswrapper[3173]: E1203 14:26:05.133484 3173 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133517 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133478671 +0000 UTC m=+26.614856203 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133531 3173 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133533 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133561 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133548333 +0000 UTC m=+26.614925955 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133598 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133633 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133639 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133665 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133655596 +0000 UTC m=+26.615032978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133691 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133676226 +0000 UTC m=+26.615053628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133707 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133723 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133716398 +0000 UTC m=+26.615093790 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133747 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133737348 +0000 UTC m=+26.615114960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133769 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133758919 +0000 UTC m=+26.615136531 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133792 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133781869 +0000 UTC m=+26.615159491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133816 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133855 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" 
Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133888 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133907 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133937 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.133929904 +0000 UTC m=+26.615307286 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133932 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133970 3173 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133987 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133993 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.133994 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.134048 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134033697 +0000 UTC m=+26.615411089 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.133971 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.134074 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134064377 +0000 UTC m=+26.615441779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.134112 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.134151 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.134165 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: I1203 14:26:05.134189 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.135509 master-0 kubenswrapper[3173]: E1203 14:26:05.134209 3173 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134197271 +0000 UTC m=+26.615574653 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134232 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134224612 +0000 UTC m=+26.615601994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134245 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134240362 +0000 UTC m=+26.615617744 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134253 3173 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134263 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134277 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134319 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134320 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134296064 +0000 UTC m=+26.615673636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134370 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134393 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134414 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134399347 +0000 UTC m=+26.615776889 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134434 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134463 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134467 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134430618 +0000 UTC m=+26.615808220 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134506 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134541 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134572 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134604 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134638 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134677 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134702 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134710 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134715 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134759 3173 
configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134723 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134714366 +0000 UTC m=+26.616091748 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134766 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134786 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134776018 +0000 UTC m=+26.616153400 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134788 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134799 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134801 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134793418 +0000 UTC m=+26.616170800 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134704 3173 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134881 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13486449 +0000 UTC m=+26.616241912 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: I1203 14:26:05.134935 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134950 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134970 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.134957203 +0000 UTC m=+26.616334585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:05.136879 master-0 kubenswrapper[3173]: E1203 14:26:05.134995 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.134987574 +0000 UTC m=+26.616364956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135055 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.135046375 +0000 UTC m=+26.616423957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135074 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.135065366 +0000 UTC m=+26.616442748 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135085 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.135081056 +0000 UTC m=+26.616458438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135110 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135191 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135212 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" 
(UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135241 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135264 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135312 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135333 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135354 3173 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135374 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135397 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135421 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135448 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.138256 master-0 
kubenswrapper[3173]: I1203 14:26:05.135493 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135515 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135549 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135586 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135609 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod 
\"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135633 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135653 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.135676 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135585 3173 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135762 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 
14:26:05.135764 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135775 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135778 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135789 3173 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135789 3173 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135602 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: W1203 14:26:05.135614 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b681889_eb2c_41fb_a1dc_69b99227b45b.slice/crio-ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764 WatchSource:0}: Error finding container ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764: Status 404 returned error can't find the container with id ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135651 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135640 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135678 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135695 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135696 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135701 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135712 3173 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135718 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.135699384 +0000 UTC m=+26.617076946 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135734 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: I1203 14:26:05.136161 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:05.138256 master-0 kubenswrapper[3173]: E1203 14:26:05.135729 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136225 3173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.135740 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.135740 3173 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.135740 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.135684 3173 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136188 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136175727 +0000 UTC m=+26.617553109 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: I1203 14:26:05.136368 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136426 3173 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136452 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136443885 +0000 UTC m=+26.617821267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: I1203 14:26:05.136449 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136511 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136496416 +0000 UTC m=+26.617873958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136538 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136526827 +0000 UTC m=+26.617904429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136556 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136546658 +0000 UTC m=+26.617924280 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136578 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136568118 +0000 UTC m=+26.617945711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136514 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136596 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136586309 +0000 UTC m=+26.617963921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136641 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13662131 +0000 UTC m=+26.617998912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136665 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136655821 +0000 UTC m=+26.618033443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136678 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136671901 +0000 UTC m=+26.618049503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136720 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136711313 +0000 UTC m=+26.618088935 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136746 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136737183 +0000 UTC m=+26.618114795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136773 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136763234 +0000 UTC m=+26.618140846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136803 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136789995 +0000 UTC m=+26.618167607 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136822 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136812925 +0000 UTC m=+26.618190527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136839 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136831496 +0000 UTC m=+26.618209098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136854 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136846516 +0000 UTC m=+26.618224248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:26:05.140273 master-0 kubenswrapper[3173]: E1203 14:26:05.136869 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136863077 +0000 UTC m=+26.618240689 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.136892 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136885007 +0000 UTC m=+26.618262619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.136914 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136909128 +0000 UTC m=+26.618286740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.136930 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136924419 +0000 UTC m=+26.618302031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.136943 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136937709 +0000 UTC m=+26.618315311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.136958 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.136952629 +0000 UTC m=+26.618330241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137027 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137069 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137084 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137121 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137130 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137119094 +0000 UTC m=+26.618496666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137179 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137184 3173 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137229 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137261 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137252198 +0000 UTC m=+26.618629790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137230 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137276 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137269718 +0000 UTC m=+26.618647320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137216 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137285 3173 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137298 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137313 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137305569 +0000 UTC m=+26.618683161 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137331 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13732317 +0000 UTC m=+26.618700552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137353 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137378 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137400 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137423 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137445 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137468 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137472 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137481 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: I1203 14:26:05.137492 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137517 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137506085 +0000 UTC m=+26.618883467 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137515 3173 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137549 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137529 3173 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:26:05.141439 master-0 kubenswrapper[3173]: E1203 14:26:05.137561 3173 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137562 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137572 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137563317 +0000 UTC m=+26.618940689 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137593 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137579447 +0000 UTC m=+26.618957029 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137615 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137607718 +0000 UTC m=+26.618985310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137618 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137616 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137643 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137636999 +0000 UTC m=+26.619014381 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137645 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137684 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13766851 +0000 UTC m=+26.619046102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137698 3173 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137711 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137736 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137729421 +0000 UTC m=+26.619106803 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137731 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137760 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137751022 +0000 UTC m=+26.619128394 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137764 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137788 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137782763 +0000 UTC m=+26.619160145 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137792 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137817 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137841 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137836114 +0000 UTC m=+26.619213486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137848 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137858 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.137848095 +0000 UTC m=+26.619225477 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137918 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137881 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.137977 3173 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.137967 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.138021 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.137992879 +0000 UTC m=+26.619370261 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.138046 3173 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.138074 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138060481 +0000 UTC m=+26.619438033 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.138103 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.138129 3173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: I1203 14:26:05.138142 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:05.142836 master-0 kubenswrapper[3173]: E1203 14:26:05.138153 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138147913 +0000 UTC m=+26.619525295 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138179 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138163204 +0000 UTC m=+26.619540816 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138184 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138218 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138211905 +0000 UTC m=+26.619589517 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138217 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138271 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138304 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138318 3173 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138356 3173 configmap.go:193] Couldn't get configMap 
openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138375 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13836877 +0000 UTC m=+26.619746152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138348 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138405 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138424 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:05.144176 master-0 
kubenswrapper[3173]: I1203 14:26:05.138437 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138448 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138442002 +0000 UTC m=+26.619819384 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138469 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138489 3173 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138495 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138500 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138514 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138507113 +0000 UTC m=+26.619884495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138531 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138537 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.138527024 +0000 UTC m=+26.619904406 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138544 3173 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138556 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138561 3173 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138602 3173 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138606 3173 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138606 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 
14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138566 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138559615 +0000 UTC m=+26.619936997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138639 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138633907 +0000 UTC m=+26.620011289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138650 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138645747 +0000 UTC m=+26.620023129 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: I1203 14:26:05.138696 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138715 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138698959 +0000 UTC m=+26.620076501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138736 3173 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:05.144176 master-0 kubenswrapper[3173]: E1203 14:26:05.138746 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.13873123 +0000 UTC m=+26.620108822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138770 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138758951 +0000 UTC m=+26.620136543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.138811 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.138858 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.138906 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138909 3173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138937 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.138941 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138955 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138962 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.138955396 +0000 UTC m=+26.620332778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.138991 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.138972927 +0000 UTC m=+26.620350489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139034 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139043 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139028808 +0000 UTC m=+26.620406410 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139049 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139072 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139098 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139115 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13909943 +0000 UTC m=+26.620477022 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139175 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139177 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139203 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139192113 +0000 UTC m=+26.620569495 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139222 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139214343 +0000 UTC m=+26.620591715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139243 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139258 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139268 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139297 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139311 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139296776 +0000 UTC m=+26.620674368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139333 3173 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139333 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139349 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139358 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139352717 +0000 UTC m=+26.620730099 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139400 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139410 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139395889 +0000 UTC m=+26.620773491 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139368 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139452 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13944536 +0000 UTC m=+26.620822742 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: I1203 14:26:05.139450 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:05.145428 master-0 kubenswrapper[3173]: E1203 14:26:05.139480 3173 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139502 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139513 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139498312 +0000 UTC m=+26.620875694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139537 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139532352 +0000 UTC m=+26.620909734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139554 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139588 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139599 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139587824 +0000 UTC m=+26.620965406 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139558 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139620 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139609755 +0000 UTC m=+26.620987137 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139673 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139735 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139783 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.139764649 +0000 UTC m=+26.621142261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139826 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139866 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139916 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139950 3173 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140026 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object 
"openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.139994 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140054 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140033 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140025816 +0000 UTC m=+26.621403198 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.139988 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140143 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.14013323 +0000 UTC m=+26.621510612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140197 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140254 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140239853 +0000 UTC m=+26.621617455 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140284 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140287 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140278254 +0000 UTC m=+26.621655836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140313 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140307114 +0000 UTC m=+26.621684486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140334 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140358 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140381 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140403 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: 
\"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140434 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140452 3173 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: I1203 14:26:05.140455 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:05.147461 master-0 kubenswrapper[3173]: E1203 14:26:05.140468 3173 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140471 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140504 3173 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140518 3173 
secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140523 3173 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140497 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140484609 +0000 UTC m=+26.621862191 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140619 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140627 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.140615883 +0000 UTC m=+26.621993265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140659 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140649314 +0000 UTC m=+26.622026696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140669 3173 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140677 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140670535 +0000 UTC m=+26.622047917 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140691 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140684825 +0000 UTC m=+26.622062207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140709 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140701866 +0000 UTC m=+26.622079248 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140726 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140776 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140811 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140797568 +0000 UTC m=+26.622174960 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140834 3173 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140841 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140876 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140888 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.140872921 +0000 UTC m=+26.622250473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140922 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140950 3173 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140957 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.140982 3173 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141020 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.140997614 +0000 UTC m=+26.622374996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141045 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141034745 +0000 UTC m=+26.622412317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141046 3173 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141075 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141060636 +0000 UTC m=+26.622438198 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.140989 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141109 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141096497 +0000 UTC m=+26.622474099 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141079 3173 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141137 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: E1203 14:26:05.141148 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.148781 master-0 kubenswrapper[3173]: I1203 14:26:05.141156 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141180 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.141172399 +0000 UTC m=+26.622549991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141215 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141246 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141265 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141296 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141284632 +0000 UTC m=+26.622662224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141312 3173 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141338 3173 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141339 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141354 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141343194 +0000 UTC m=+26.622720776 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141375 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141365304 +0000 UTC m=+26.622742876 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141423 3173 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141432 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141463 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.141451917 +0000 UTC m=+26.622829529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141479 3173 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141499 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141510 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141501388 +0000 UTC m=+26.622878980 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141557 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141609 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141568 3173 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141653 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141659 3173 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141650093 +0000 UTC m=+26.623027675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141713 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141749 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141740855 +0000 UTC m=+26.623118437 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141710 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141794 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141755 3173 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141845 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: 
E1203 14:26:05.141861 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.141849768 +0000 UTC m=+26.623227380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141902 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141920 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141935 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.14192662 +0000 UTC m=+26.623304192 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: I1203 14:26:05.141962 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.141980 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:05.149940 master-0 kubenswrapper[3173]: E1203 14:26:05.142032 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.142019913 +0000 UTC m=+26.623397475 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: I1203 14:26:05.142026 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: I1203 14:26:05.142092 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: I1203 14:26:05.142129 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.142097 3173 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143229 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143214527 +0000 UTC m=+26.624591909 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143240 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.141616 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143310 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143289629 +0000 UTC m=+26.624667011 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.141815 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143326 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.14331809 +0000 UTC m=+26.624695462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.142162 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143388 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143357401 +0000 UTC m=+26.624734783 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143415 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143405462 +0000 UTC m=+26.624782844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143427 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143458 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143452264 +0000 UTC m=+26.624829646 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143549 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:05.151066 master-0 kubenswrapper[3173]: E1203 14:26:05.143611 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.143592378 +0000 UTC m=+26.624969760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:05.157286 master-0 kubenswrapper[3173]: W1203 14:26:05.157224 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb71ac8a5_987d_4eba_8bc0_a091f0a0de16.slice/crio-7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286 WatchSource:0}: Error finding container 7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286: Status 404 returned error can't find the container with id 7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: I1203 14:26:05.246341 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: I1203 14:26:05.246535 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: E1203 14:26:05.246541 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: E1203 14:26:05.246581 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: E1203 14:26:05.246598 3173 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.246719 master-0 kubenswrapper[3173]: E1203 14:26:05.246667 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.24664534 +0000 UTC m=+26.728022722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.247226 master-0 kubenswrapper[3173]: E1203 14:26:05.246744 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.247226 master-0 kubenswrapper[3173]: E1203 14:26:05.246770 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.247226 master-0 kubenswrapper[3173]: E1203 14:26:05.246788 3173 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.247226 master-0 kubenswrapper[3173]: E1203 14:26:05.246862 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.246841216 +0000 UTC m=+26.728218628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.328227 master-0 kubenswrapper[3173]: I1203 14:26:05.328156 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:05.339317 master-0 kubenswrapper[3173]: W1203 14:26:05.339261 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c2a40b_213c_42f1_9459_87c2e780a75f.slice/crio-886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751 WatchSource:0}: Error finding container 886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751: Status 404 returned error can't find the container with id 886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: I1203 14:26:05.351499 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: I1203 14:26:05.351658 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: I1203 14:26:05.351733 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: I1203 14:26:05.351853 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353122 3173 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353160 3173 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353176 3173 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353255 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.353230813 +0000 UTC m=+26.834608185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353310 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353358 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353375 3173 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353379 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353392 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353401 3173 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353432 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.353423328 +0000 UTC m=+26.834800710 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353451 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.353443949 +0000 UTC m=+26.834821331 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353511 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353524 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353536 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.355023 master-0 kubenswrapper[3173]: E1203 14:26:05.353576 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.353566992 +0000 UTC m=+26.834944594 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.357982 master-0 kubenswrapper[3173]: E1203 14:26:05.357568 3173 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.357982 master-0 kubenswrapper[3173]: E1203 14:26:05.357612 3173 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.357982 master-0 kubenswrapper[3173]: E1203 14:26:05.357628 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.357982 master-0 kubenswrapper[3173]: E1203 14:26:05.357700 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.857676799 +0000 UTC m=+26.339054241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.358820 master-0 kubenswrapper[3173]: E1203 14:26:05.358600 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.358820 master-0 kubenswrapper[3173]: E1203 14:26:05.358629 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.358820 master-0 kubenswrapper[3173]: E1203 14:26:05.358643 3173 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.359290 master-0 kubenswrapper[3173]: I1203 14:26:05.359247 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:05.359360 master-0 kubenswrapper[3173]: E1203 14:26:05.359331 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.859277604 +0000 UTC m=+26.340654986 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.359487 master-0 kubenswrapper[3173]: I1203 14:26:05.359435 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:05.359487 master-0 kubenswrapper[3173]: I1203 14:26:05.359457 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:05.361663 master-0 kubenswrapper[3173]: I1203 14:26:05.361193 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:26:05.361663 master-0 kubenswrapper[3173]: I1203 14:26:05.361367 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:05.362548 master-0 kubenswrapper[3173]: E1203 14:26:05.362067 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.362548 master-0 kubenswrapper[3173]: E1203 14:26:05.362118 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.362548 master-0 kubenswrapper[3173]: E1203 14:26:05.362146 3173 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.362548 master-0 kubenswrapper[3173]: E1203 14:26:05.362249 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.862212817 +0000 UTC m=+26.343590379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.364044 master-0 kubenswrapper[3173]: I1203 14:26:05.363982 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:05.366967 master-0 kubenswrapper[3173]: E1203 14:26:05.366820 3173 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.366967 master-0 kubenswrapper[3173]: E1203 14:26:05.366850 3173 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.366967 master-0 kubenswrapper[3173]: E1203 14:26:05.366865 3173 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.366967 master-0 kubenswrapper[3173]: E1203 14:26:05.366924 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.866905991 +0000 UTC m=+26.348283383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.368047 master-0 kubenswrapper[3173]: E1203 14:26:05.367622 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.368047 master-0 kubenswrapper[3173]: E1203 14:26:05.367661 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.368047 master-0 kubenswrapper[3173]: E1203 14:26:05.367680 3173 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.368047 master-0 kubenswrapper[3173]: E1203 14:26:05.367781 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.867752964 +0000 UTC m=+26.349130516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.370606 master-0 kubenswrapper[3173]: I1203 14:26:05.370571 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:05.397549 master-0 kubenswrapper[3173]: I1203 14:26:05.397491 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:05.405381 master-0 kubenswrapper[3173]: I1203 14:26:05.405268 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:26:05.411996 master-0 kubenswrapper[3173]: W1203 14:26:05.411949 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c00a797_4c60_43dd_bd04_16b2c6f1b6a8.slice/crio-ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f WatchSource:0}: Error finding container ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f: Status 404 returned error can't find the container with id ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f
Dec 03 14:26:05.418884 master-0 kubenswrapper[3173]: W1203 14:26:05.418849 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799e819f_f4b2_4ac9_8fa4_7d4da7a79285.slice/crio-30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976 WatchSource:0}: Error finding container 30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976: Status 404 returned error can't find the container with id 30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976
Dec 03 14:26:05.444477 master-0 kubenswrapper[3173]: E1203 14:26:05.444431 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.444588 master-0 kubenswrapper[3173]: E1203 14:26:05.444485 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.444588 master-0 kubenswrapper[3173]: E1203 14:26:05.444507 3173 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.444652 master-0 kubenswrapper[3173]: E1203 14:26:05.444624 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.944588733 +0000 UTC m=+26.425966125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.444829 3173 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.444881 3173 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.444902 3173 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.444970 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.444995 3173
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.944965144 +0000 UTC m=+26.426342526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.445013 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.445039 3173 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.445126 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.945100018 +0000 UTC m=+26.426477400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.446188 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.446210 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.446225 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.446911 master-0 kubenswrapper[3173]: E1203 14:26:05.446283 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.946269231 +0000 UTC m=+26.427646823 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.448546 master-0 kubenswrapper[3173]: E1203 14:26:05.448443 3173 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.448546 master-0 kubenswrapper[3173]: E1203 14:26:05.448464 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.448546 master-0 kubenswrapper[3173]: E1203 14:26:05.448501 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.948491894 +0000 UTC m=+26.429869276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.459776 master-0 kubenswrapper[3173]: I1203 14:26:05.459625 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:05.459776 master-0 kubenswrapper[3173]: I1203 14:26:05.459715 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:05.459776 master-0 kubenswrapper[3173]: I1203 14:26:05.459750 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: I1203 14:26:05.459895 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") 
pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: I1203 14:26:05.459942 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: E1203 14:26:05.459985 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: E1203 14:26:05.460048 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: E1203 14:26:05.460064 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460175 master-0 kubenswrapper[3173]: E1203 14:26:05.460135 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.460113864 +0000 UTC m=+26.941491246 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460374 master-0 kubenswrapper[3173]: E1203 14:26:05.460296 3173 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460374 master-0 kubenswrapper[3173]: E1203 14:26:05.460328 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460436 master-0 kubenswrapper[3173]: E1203 14:26:05.460398 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460436 master-0 kubenswrapper[3173]: E1203 14:26:05.460425 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.460505 master-0 kubenswrapper[3173]: E1203 14:26:05.460440 3173 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460505 master-0 kubenswrapper[3173]: E1203 14:26:05.460469 3173 projected.go:288] Couldn't get configMap 
openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460505 master-0 kubenswrapper[3173]: E1203 14:26:05.460490 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.460505 master-0 kubenswrapper[3173]: E1203 14:26:05.460405 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.460379091 +0000 UTC m=+26.941756683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460505 master-0 kubenswrapper[3173]: E1203 14:26:05.460502 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460551 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.460533876 +0000 UTC m=+26.941911458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460573 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460619 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460634 3173 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460696 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.46067981 +0000 UTC m=+26.942057412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.460747 master-0 kubenswrapper[3173]: E1203 14:26:05.460732 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.460721581 +0000 UTC m=+26.942099183 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.465803 master-0 kubenswrapper[3173]: E1203 14:26:05.465663 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.465803 master-0 kubenswrapper[3173]: E1203 14:26:05.465693 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.465803 master-0 kubenswrapper[3173]: E1203 14:26:05.465703 3173 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not 
registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.465803 master-0 kubenswrapper[3173]: E1203 14:26:05.465751 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.965738673 +0000 UTC m=+26.447116055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.492549 master-0 kubenswrapper[3173]: E1203 14:26:05.492504 3173 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.492549 master-0 kubenswrapper[3173]: E1203 14:26:05.492544 3173 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.492706 master-0 kubenswrapper[3173]: E1203 14:26:05.492559 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.492706 master-0 kubenswrapper[3173]: E1203 14:26:05.492640 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:05.992617455 +0000 UTC m=+26.473994827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.499893 master-0 kubenswrapper[3173]: E1203 14:26:05.499849 3173 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:26:05.519347 master-0 kubenswrapper[3173]: E1203 14:26:05.519300 3173 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.519465 master-0 kubenswrapper[3173]: E1203 14:26:05.519359 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.519465 master-0 kubenswrapper[3173]: E1203 14:26:05.519449 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.019404385 +0000 UTC m=+26.500781767 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.523298 master-0 kubenswrapper[3173]: E1203 14:26:05.523201 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.523298 master-0 kubenswrapper[3173]: E1203 14:26:05.523242 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.523298 master-0 kubenswrapper[3173]: E1203 14:26:05.523278 3173 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.523465 master-0 kubenswrapper[3173]: E1203 14:26:05.523370 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.023349607 +0000 UTC m=+26.504726989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.566577 master-0 kubenswrapper[3173]: I1203 14:26:05.566512 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:05.567844 master-0 kubenswrapper[3173]: E1203 14:26:05.567815 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.567844 master-0 kubenswrapper[3173]: E1203 14:26:05.567840 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.567969 master-0 kubenswrapper[3173]: E1203 14:26:05.567852 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.567969 master-0 kubenswrapper[3173]: E1203 14:26:05.567894 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.5678812 +0000 UTC m=+27.049258582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.604401 master-0 kubenswrapper[3173]: I1203 14:26:05.604067 3173 request.go:700] Waited for 1.013837661s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token Dec 03 14:26:05.622836 master-0 kubenswrapper[3173]: I1203 14:26:05.622674 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"77430348-b53a-4898-8047-be8bb542a0a7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wm96f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-txl6b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:05.632058 master-0 kubenswrapper[3173]: E1203 14:26:05.631929 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not 
registered Dec 03 14:26:05.632058 master-0 kubenswrapper[3173]: E1203 14:26:05.631973 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.632058 master-0 kubenswrapper[3173]: E1203 14:26:05.631990 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.632218 master-0 kubenswrapper[3173]: E1203 14:26:05.632064 3173 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:05.632218 master-0 kubenswrapper[3173]: E1203 14:26:05.632097 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.632218 master-0 kubenswrapper[3173]: E1203 14:26:05.632104 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.13207956 +0000 UTC m=+26.613456942 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.632218 master-0 kubenswrapper[3173]: E1203 14:26:05.632113 3173 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.632218 master-0 kubenswrapper[3173]: E1203 14:26:05.632185 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.132163883 +0000 UTC m=+26.613541325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.636748 master-0 kubenswrapper[3173]: I1203 14:26:05.636706 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:05.637432 master-0 kubenswrapper[3173]: I1203 14:26:05.637394 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:05.670506 master-0 kubenswrapper[3173]: I1203 14:26:05.670452 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:05.670506 master-0 kubenswrapper[3173]: I1203 14:26:05.670516 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: 
\"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:05.670775 master-0 kubenswrapper[3173]: I1203 14:26:05.670561 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:05.670775 master-0 kubenswrapper[3173]: I1203 14:26:05.670649 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:05.670930 master-0 kubenswrapper[3173]: E1203 14:26:05.670843 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:05.670966 master-0 kubenswrapper[3173]: E1203 14:26:05.670932 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.670966 master-0 kubenswrapper[3173]: E1203 14:26:05.670946 3173 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671049 master-0 kubenswrapper[3173]: E1203 14:26:05.670846 3173 
projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:05.671049 master-0 kubenswrapper[3173]: E1203 14:26:05.671024 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.671124 master-0 kubenswrapper[3173]: E1203 14:26:05.671033 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:05.671124 master-0 kubenswrapper[3173]: E1203 14:26:05.671075 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.671124 master-0 kubenswrapper[3173]: E1203 14:26:05.671090 3173 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671247 master-0 kubenswrapper[3173]: E1203 14:26:05.671035 3173 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671247 master-0 kubenswrapper[3173]: E1203 14:26:05.671160 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r 
podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.671128528 +0000 UTC m=+27.152505910 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671247 master-0 kubenswrapper[3173]: E1203 14:26:05.671184 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.671177829 +0000 UTC m=+27.152555211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671247 master-0 kubenswrapper[3173]: E1203 14:26:05.670863 3173 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.671247 master-0 kubenswrapper[3173]: E1203 14:26:05.671250 3173 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.671407 master-0 kubenswrapper[3173]: E1203 14:26:05.671268 3173 projected.go:194] Error preparing data for projected volume 
kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671407 master-0 kubenswrapper[3173]: E1203 14:26:05.671298 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.671284532 +0000 UTC m=+27.152661914 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.671407 master-0 kubenswrapper[3173]: E1203 14:26:05.671321 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.671312083 +0000 UTC m=+27.152689465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.675622 master-0 kubenswrapper[3173]: E1203 14:26:05.675573 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.675622 master-0 kubenswrapper[3173]: E1203 14:26:05.675594 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.675622 master-0 kubenswrapper[3173]: E1203 14:26:05.675630 3173 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.675965 master-0 kubenswrapper[3173]: I1203 14:26:05.675699 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:05.675965 master-0 kubenswrapper[3173]: E1203 14:26:05.675739 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv 
podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.175725048 +0000 UTC m=+26.657102470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.685275 master-0 kubenswrapper[3173]: I1203 14:26:05.685056 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:05.702723 master-0 kubenswrapper[3173]: E1203 14:26:05.702660 3173 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.702723 master-0 kubenswrapper[3173]: E1203 14:26:05.702696 3173 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.702723 master-0 kubenswrapper[3173]: E1203 14:26:05.702712 3173 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.703630 master-0 kubenswrapper[3173]: E1203 14:26:05.702779 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.202759325 +0000 UTC m=+26.684136707 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.746433 master-0 kubenswrapper[3173]: I1203 14:26:05.746164 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:05.757252 master-0 kubenswrapper[3173]: E1203 14:26:05.754659 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.757252 master-0 kubenswrapper[3173]: E1203 14:26:05.754695 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.757252 master-0 kubenswrapper[3173]: E1203 14:26:05.754711 3173 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.757252 master-0 kubenswrapper[3173]: E1203 14:26:05.754812 
3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.25478647 +0000 UTC m=+26.736163852 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.759201 master-0 kubenswrapper[3173]: W1203 14:26:05.759137 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7d6a05e_beee_40e9_b376_5c22e285b27a.slice/crio-e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a WatchSource:0}: Error finding container e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a: Status 404 returned error can't find the container with id e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a Dec 03 14:26:05.761769 master-0 kubenswrapper[3173]: I1203 14:26:05.761733 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:05.763835 master-0 kubenswrapper[3173]: I1203 14:26:05.763527 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: 
\"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:05.800792 master-0 kubenswrapper[3173]: E1203 14:26:05.800711 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:05.801039 master-0 kubenswrapper[3173]: E1203 14:26:05.800825 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.801039 master-0 kubenswrapper[3173]: E1203 14:26:05.800846 3173 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.801039 master-0 kubenswrapper[3173]: E1203 14:26:05.800931 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.300906038 +0000 UTC m=+26.782283420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.806207 master-0 kubenswrapper[3173]: I1203 14:26:05.806138 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:05.881136 master-0 kubenswrapper[3173]: E1203 14:26:05.880669 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:05.881136 master-0 kubenswrapper[3173]: E1203 14:26:05.881134 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.881136 master-0 kubenswrapper[3173]: E1203 14:26:05.881151 3173 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881503 master-0 kubenswrapper[3173]: I1203 14:26:05.881347 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:05.881503 master-0 kubenswrapper[3173]: E1203 14:26:05.881415 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.38136511 +0000 UTC m=+26.862742492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881503 master-0 kubenswrapper[3173]: E1203 14:26:05.881484 3173 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:05.881503 master-0 kubenswrapper[3173]: E1203 14:26:05.881501 3173 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.881503 master-0 kubenswrapper[3173]: I1203 14:26:05.881502 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:05.881717 master-0 kubenswrapper[3173]: I1203 14:26:05.881565 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod 
\"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:05.881717 master-0 kubenswrapper[3173]: E1203 14:26:05.881512 3173 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881717 master-0 kubenswrapper[3173]: I1203 14:26:05.881659 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:05.881717 master-0 kubenswrapper[3173]: E1203 14:26:05.881688 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.881664708 +0000 UTC m=+27.363042160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881717 master-0 kubenswrapper[3173]: E1203 14:26:05.881700 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881741 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881761 3173 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881781 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881795 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881808 3173 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object 
"openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881841 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.881819133 +0000 UTC m=+27.363196585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881872 3173 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881911 3173 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.881938 master-0 kubenswrapper[3173]: E1203 14:26:05.881922 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: I1203 14:26:05.881946 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod 
\"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882028 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.881957977 +0000 UTC m=+27.363335359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882040 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882055 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882064 3173 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882094 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.88208566 +0000 UTC m=+27.363463142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.882422 master-0 kubenswrapper[3173]: E1203 14:26:05.882311 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.882297696 +0000 UTC m=+27.363675158 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.885838 master-0 kubenswrapper[3173]: I1203 14:26:05.883500 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:05.885838 master-0 kubenswrapper[3173]: I1203 14:26:05.883728 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:05.885838 master-0 kubenswrapper[3173]: I1203 14:26:05.885545 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:05.929078 master-0 kubenswrapper[3173]: I1203 14:26:05.924878 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:05.929078 master-0 kubenswrapper[3173]: E1203 14:26:05.928589 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:05.929078 master-0 kubenswrapper[3173]: E1203 14:26:05.928613 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.929078 master-0 kubenswrapper[3173]: E1203 14:26:05.928627 3173 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.929078 master-0 kubenswrapper[3173]: E1203 14:26:05.928693 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.428672371 +0000 UTC m=+26.910049753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.936064 master-0 kubenswrapper[3173]: I1203 14:26:05.936023 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:05.946692 master-0 kubenswrapper[3173]: I1203 14:26:05.946643 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:05.949511 master-0 kubenswrapper[3173]: E1203 14:26:05.947970 3173 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.949511 master-0 kubenswrapper[3173]: E1203 14:26:05.947999 3173 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.949511 master-0 kubenswrapper[3173]: E1203 14:26:05.948029 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.949511 master-0 kubenswrapper[3173]: E1203 14:26:05.948083 3173 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.448067731 +0000 UTC m=+26.929445113 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.964889 master-0 kubenswrapper[3173]: I1203 14:26:05.958653 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-kk4tm" Dec 03 14:26:05.964889 master-0 kubenswrapper[3173]: I1203 14:26:05.962211 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:05.964889 master-0 kubenswrapper[3173]: I1203 14:26:05.963271 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:05.977134 master-0 kubenswrapper[3173]: I1203 14:26:05.976801 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:05.977134 master-0 kubenswrapper[3173]: W1203 14:26:05.976896 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc777c9de_1ace_46be_b5c2_c71d252f53f4.slice/crio-8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a WatchSource:0}: Error finding container 8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a: Status 404 returned error can't find the container with id 8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: I1203 14:26:05.986909 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: I1203 14:26:05.986954 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: I1203 14:26:05.986994 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:05.991895 master-0 
kubenswrapper[3173]: I1203 14:26:05.987040 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: I1203 14:26:05.987068 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987156 3173 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: I1203 14:26:05.987181 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987186 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 
master-0 kubenswrapper[3173]: E1203 14:26:05.987212 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987156 3173 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987230 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987240 3173 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987245 3173 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987249 3173 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987292 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:06.987271203 +0000 UTC m=+27.468648595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987314 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987351 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987369 3173 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987322 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987435 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987446 3173 
projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987157 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987484 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987517 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987548 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.98752769 +0000 UTC m=+27.468905082 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987568 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.987559541 +0000 UTC m=+27.468936923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987585 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.987575692 +0000 UTC m=+27.468953074 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987599 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.987591422 +0000 UTC m=+27.468968804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:05.991895 master-0 kubenswrapper[3173]: E1203 14:26:05.987731 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.987710726 +0000 UTC m=+27.469088188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:05.999592 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:05.999738 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:05.999814 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:05.999867 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:05.999927 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:05.999995 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000055 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000097 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000132 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000170 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000204 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000269 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000325 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000390 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000446 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000511 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000556 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000603 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000667 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000715 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000752 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000794 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000833 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000872 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000906 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.000942 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.000972 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001026 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.001063 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001108 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.001157 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001251 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.001300 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001340 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.001380 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001416 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: I1203 14:26:06.001460 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:06.001558 master-0 kubenswrapper[3173]: E1203 14:26:06.001510 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:06.012825 master-0 kubenswrapper[3173]: W1203 14:26:06.012629 3173 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c95e54_b4ba_4b19_a97c_abcec840ac5d.slice/crio-25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca WatchSource:0}: Error finding container 25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca: Status 404 returned error can't find the container with id 25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca Dec 03 14:26:06.033985 master-0 kubenswrapper[3173]: E1203 14:26:06.033943 3173 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.033985 master-0 kubenswrapper[3173]: E1203 14:26:06.033981 3173 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.034113 master-0 kubenswrapper[3173]: E1203 14:26:06.033995 3173 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.034113 master-0 kubenswrapper[3173]: E1203 14:26:06.034092 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.53407167 +0000 UTC m=+27.015449052 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.034179 master-0 kubenswrapper[3173]: E1203 14:26:06.034140 3173 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:06.034179 master-0 kubenswrapper[3173]: E1203 14:26:06.034156 3173 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.034179 master-0 kubenswrapper[3173]: E1203 14:26:06.034166 3173 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.034343 master-0 kubenswrapper[3173]: E1203 14:26:06.034199 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.534190724 +0000 UTC m=+27.015568106 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.039496 master-0 kubenswrapper[3173]: I1203 14:26:06.039430 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:06.069343 master-0 kubenswrapper[3173]: E1203 14:26:06.066113 3173 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.069343 master-0 kubenswrapper[3173]: E1203 14:26:06.066148 3173 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.069343 master-0 kubenswrapper[3173]: E1203 14:26:06.066161 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.069343 master-0 kubenswrapper[3173]: E1203 14:26:06.066227 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.566206872 +0000 UTC m=+27.047584254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.083059 master-0 kubenswrapper[3173]: I1203 14:26:06.076455 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:06.086337 master-0 kubenswrapper[3173]: E1203 14:26:06.086294 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:06.086337 master-0 kubenswrapper[3173]: E1203 14:26:06.086329 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.086337 master-0 kubenswrapper[3173]: E1203 14:26:06.086341 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.086503 master-0 kubenswrapper[3173]: E1203 14:26:06.086402 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.586383544 +0000 UTC m=+27.067760926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.090652 master-0 kubenswrapper[3173]: I1203 14:26:06.090602 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:06.090817 master-0 kubenswrapper[3173]: E1203 14:26:06.090786 3173 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.090865 master-0 kubenswrapper[3173]: E1203 14:26:06.090816 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.090865 master-0 kubenswrapper[3173]: I1203 14:26:06.090819 3173 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:06.090865 master-0 kubenswrapper[3173]: E1203 14:26:06.090860 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.09084573 +0000 UTC m=+27.572223112 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.090975 master-0 kubenswrapper[3173]: I1203 14:26:06.090913 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:06.090975 master-0 kubenswrapper[3173]: E1203 14:26:06.090946 3173 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.090975 master-0 kubenswrapper[3173]: E1203 14:26:06.090961 3173 projected.go:288] Couldn't get configMap 
openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.090975 master-0 kubenswrapper[3173]: E1203 14:26:06.090973 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.091166 master-0 kubenswrapper[3173]: E1203 14:26:06.091140 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.091123578 +0000 UTC m=+27.572500960 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.091222 master-0 kubenswrapper[3173]: E1203 14:26:06.091173 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.091222 master-0 kubenswrapper[3173]: E1203 14:26:06.091210 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.091222 master-0 kubenswrapper[3173]: E1203 14:26:06.091222 3173 
projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.091346 master-0 kubenswrapper[3173]: E1203 14:26:06.091296 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.091271382 +0000 UTC m=+27.572648834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.108054 master-0 kubenswrapper[3173]: I1203 14:26:06.106192 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:06.117189 master-0 kubenswrapper[3173]: E1203 14:26:06.117146 3173 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:06.117189 master-0 kubenswrapper[3173]: E1203 14:26:06.117183 3173 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: 
object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.117320 master-0 kubenswrapper[3173]: E1203 14:26:06.117201 3173 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.117320 master-0 kubenswrapper[3173]: E1203 14:26:06.117268 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.617250139 +0000 UTC m=+27.098627601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.128574 master-0 kubenswrapper[3173]: I1203 14:26:06.128500 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c"} Dec 03 14:26:06.134458 master-0 kubenswrapper[3173]: I1203 14:26:06.134365 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a"} Dec 03 14:26:06.140026 master-0 kubenswrapper[3173]: I1203 14:26:06.139538 3173 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e"} Dec 03 14:26:06.140152 master-0 kubenswrapper[3173]: I1203 14:26:06.140031 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976"} Dec 03 14:26:06.142121 master-0 kubenswrapper[3173]: I1203 14:26:06.142056 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"daead6d8ab1c752d4b025d2534c179bf94c2bedf96e05f68ebbd381b99e740d4"} Dec 03 14:26:06.142121 master-0 kubenswrapper[3173]: I1203 14:26:06.142115 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-54f97f57-rr9px" event={"ID":"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8","Type":"ContainerStarted","Data":"ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f"} Dec 03 14:26:06.146746 master-0 kubenswrapper[3173]: I1203 14:26:06.146702 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"aae667aaf7d9ead8825ac51d2208cf004d37ea54b9226adb36042e31e9a6d6c0"} Dec 03 14:26:06.146746 master-0 kubenswrapper[3173]: I1203 14:26:06.146745 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" 
event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764"} Dec 03 14:26:06.148919 master-0 kubenswrapper[3173]: I1203 14:26:06.148863 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"6ced9036c82611381ccbcbf8db527d98d0fa5f9b15d70705b33289b53878a29c"} Dec 03 14:26:06.148919 master-0 kubenswrapper[3173]: I1203 14:26:06.148921 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" event={"ID":"6935a3f8-723e-46e6-8498-483f34bf0825","Type":"ContainerStarted","Data":"ea701b61420d6a520ce249f2d459377fdee98f362b993accdff99d01371c65bf"} Dec 03 14:26:06.150071 master-0 kubenswrapper[3173]: I1203 14:26:06.149840 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75"} Dec 03 14:26:06.155573 master-0 kubenswrapper[3173]: I1203 14:26:06.151573 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a"} Dec 03 14:26:06.161077 master-0 kubenswrapper[3173]: I1203 14:26:06.161030 3173 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="bd245e8e5a862c9ab237c131c5a86ef8f53726005c94dae01555c97627b35f8a" exitCode=0 Dec 03 14:26:06.161315 master-0 kubenswrapper[3173]: I1203 14:26:06.161134 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" 
event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerDied","Data":"bd245e8e5a862c9ab237c131c5a86ef8f53726005c94dae01555c97627b35f8a"} Dec 03 14:26:06.161403 master-0 kubenswrapper[3173]: I1203 14:26:06.161340 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286"} Dec 03 14:26:06.166693 master-0 kubenswrapper[3173]: I1203 14:26:06.163844 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304"} Dec 03 14:26:06.170713 master-0 kubenswrapper[3173]: I1203 14:26:06.168505 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca"} Dec 03 14:26:06.171194 master-0 kubenswrapper[3173]: I1203 14:26:06.171135 3173 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="e996313978fa83ba3f4b88159399d4394708e2235d676a3ec4d90f36c6ebdd4f" exitCode=0 Dec 03 14:26:06.171194 master-0 kubenswrapper[3173]: I1203 14:26:06.171191 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"e996313978fa83ba3f4b88159399d4394708e2235d676a3ec4d90f36c6ebdd4f"} Dec 03 14:26:06.171330 master-0 kubenswrapper[3173]: I1203 14:26:06.171223 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" 
event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751"} Dec 03 14:26:06.173072 master-0 kubenswrapper[3173]: E1203 14:26:06.173029 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.173072 master-0 kubenswrapper[3173]: E1203 14:26:06.173058 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.173072 master-0 kubenswrapper[3173]: E1203 14:26:06.173072 3173 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.173200 master-0 kubenswrapper[3173]: E1203 14:26:06.173134 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.673114113 +0000 UTC m=+27.154491505 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.183711 master-0 kubenswrapper[3173]: E1203 14:26:06.183660 3173 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.183711 master-0 kubenswrapper[3173]: E1203 14:26:06.183698 3173 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.183711 master-0 kubenswrapper[3173]: E1203 14:26:06.183713 3173 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.183879 master-0 kubenswrapper[3173]: E1203 14:26:06.183778 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.683757655 +0000 UTC m=+27.165135047 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.186821 master-0 kubenswrapper[3173]: I1203 14:26:06.186724 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:06.193144 master-0 kubenswrapper[3173]: I1203 14:26:06.193066 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:06.193144 master-0 kubenswrapper[3173]: I1203 14:26:06.193145 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193179 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" 
(UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193209 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193246 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: E1203 14:26:06.193248 3173 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193271 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193314 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193336 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: E1203 14:26:06.193349 3173 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: I1203 14:26:06.193361 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.193395 master-0 kubenswrapper[3173]: E1203 14:26:06.193388 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193370218 +0000 UTC m=+28.674747680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193468 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193485 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193507 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193543 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193525802 +0000 UTC m=+28.674903184 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193564 3173 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193595 3173 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193563 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193600 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193591174 +0000 UTC m=+28.674968656 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193642 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193660 3173 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193670 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193689 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193680837 +0000 UTC m=+28.675058319 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193716 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193748 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193764 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: I1203 14:26:06.193778 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193830 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.1937925 +0000 UTC m=+28.675169882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:06.193830 master-0 kubenswrapper[3173]: E1203 14:26:06.193839 3173 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.193858 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.193869 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193859722 +0000 UTC m=+28.675237204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.193901 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.193920 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.193925 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.193918343 +0000 UTC m=+28.675295825 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.193985 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194058 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194088 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194143 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") 
" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194163 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194206 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194232 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194278 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194302 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: I1203 14:26:06.194321 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194334 3173 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194432 3173 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194441 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194418208 +0000 UTC m=+28.675795670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194471 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194461839 +0000 UTC m=+28.675839221 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194486 3173 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194500 3173 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194523 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19451693 +0000 UTC m=+28.675894402 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194537 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194529081 +0000 UTC m=+28.675906573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194540 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194560 3173 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:06.194546 master-0 kubenswrapper[3173]: E1203 14:26:06.194576 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194567462 +0000 UTC m=+28.675944954 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194594 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194610 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194621 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194628 3173 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194639 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194649 3173 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod 
openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194654 3173 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194599 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194589512 +0000 UTC m=+28.675967014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194682 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194675255 +0000 UTC m=+28.676052747 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194690 3173 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194701 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194693595 +0000 UTC m=+28.676071077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194721 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.194713716 +0000 UTC m=+27.676091238 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194742 3173 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194746 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.194739027 +0000 UTC m=+27.676116499 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194756 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194763 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194764 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194756967 +0000 UTC m=+28.676134459 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: I1203 14:26:06.194361 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194778 3173 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194786 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194779558 +0000 UTC m=+28.676156940 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194827 3173 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194831 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194820589 +0000 UTC m=+28.676197971 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: I1203 14:26:06.194831 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194850 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19484235 +0000 UTC m=+28.676219832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194865 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: I1203 14:26:06.194872 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194896 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: I1203 14:26:06.194903 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194918 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template 
podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194908861 +0000 UTC m=+28.676286243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194933 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194926972 +0000 UTC m=+28.676304344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194949 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194959 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: E1203 14:26:06.194977 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle 
podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194970213 +0000 UTC m=+28.676347715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:06.195735 master-0 kubenswrapper[3173]: I1203 14:26:06.194956 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.194994 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.194986374 +0000 UTC m=+28.676363856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195038 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195060 3173 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195065 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195072 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195061586 +0000 UTC m=+28.676439018 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195037 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195087 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195079916 +0000 UTC m=+28.676457298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195129 3173 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195131 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195123048 +0000 UTC m=+28.676500430 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195158 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195175 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195175 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195215 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195184 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 
14:26:06.195237 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195227741 +0000 UTC m=+28.676605113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195186 3173 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195257 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195251981 +0000 UTC m=+28.676629363 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195270 3173 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195159 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195284 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195316 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195218 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195228 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object 
"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195290 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195280202 +0000 UTC m=+28.676657584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195414 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.195370135 +0000 UTC m=+27.676747597 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195209 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195431 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195423476 +0000 UTC m=+28.676800858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195457 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195446347 +0000 UTC m=+28.676823829 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195505 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: I1203 14:26:06.195533 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195571 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195577 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.1955516 +0000 UTC m=+28.676929062 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195601 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195592281 +0000 UTC m=+28.676969753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195609 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:06.197478 master-0 kubenswrapper[3173]: E1203 14:26:06.195620 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195612071 +0000 UTC m=+28.676989563 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195636 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195629092 +0000 UTC m=+28.677006474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195650 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195645312 +0000 UTC m=+28.677022694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195668 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195716 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195737 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195760 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195782 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195793 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195814 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195834 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195840 3173 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195887 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195900 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.195890569 +0000 UTC m=+28.677268021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195918 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19590992 +0000 UTC m=+28.677287402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195936 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19592637 +0000 UTC m=+28.677303842 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.195984 3173 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196034 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196025493 +0000 UTC m=+28.677402965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196076 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196103 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196094755 +0000 UTC m=+28.677472237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.195855 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196141 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196171 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196202 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196238 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196256 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196263 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196283 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19627585 +0000 UTC m=+28.677653342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196301 3173 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196310 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196324 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196318031 +0000 UTC m=+28.677695413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: I1203 14:26:06.196344 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196354 3173 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196389 3173 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:06.199201 master-0 kubenswrapper[3173]: E1203 14:26:06.196406 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196400384 +0000 UTC m=+28.677777766 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196422 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196452 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196471 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196496 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196516 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196545 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196601 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196623 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196643 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196662 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196673 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196685 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196706 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196694382 +0000 UTC m=+28.678071864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196725 3173 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196731 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196745 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196739383 +0000 UTC m=+28.678116765 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196758 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196752124 +0000 UTC m=+28.678129506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196775 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196792 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196805 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196795125 +0000 UTC m=+28.678172597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196827 3173 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196856 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: I1203 14:26:06.196853 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196874 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196868897 +0000 UTC m=+28.678246269 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196891 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196910 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196905258 +0000 UTC m=+28.678282640 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196950 3173 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196977 3173 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.196979 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19697078 +0000 UTC m=+28.678348262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197032 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.196997231 +0000 UTC m=+28.678374603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197037 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197063 3173 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197064 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197056712 +0000 UTC m=+28.678434094 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197087 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197082223 +0000 UTC m=+28.678459605 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197100 3173 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:06.202306 master-0 kubenswrapper[3173]: E1203 14:26:06.197117 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197125 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197117834 +0000 UTC m=+28.678495306 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197136 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197131495 +0000 UTC m=+28.678508877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197149 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197143505 +0000 UTC m=+28.678520887 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197169 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197179 3173 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197186 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197181396 +0000 UTC m=+28.678558778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197208 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197200856 +0000 UTC m=+28.678578338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197209 3173 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197233 3173 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197238 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197231987 +0000 UTC m=+28.678609499 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197254 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197246828 +0000 UTC m=+28.678624210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197269 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197305 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197324 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.196951 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197270 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197262158 +0000 UTC m=+28.678639650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197366 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197360141 +0000 UTC m=+28.678737523 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197382 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197373991 +0000 UTC m=+28.678751463 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197394 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197388922 +0000 UTC m=+28.678766404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197416 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197447 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197487 3173 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197545 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197575 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197603 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197622 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") 
pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197640 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197660 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197682 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: I1203 14:26:06.197702 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:06.204369 
master-0 kubenswrapper[3173]: E1203 14:26:06.197711 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:06.204369 master-0 kubenswrapper[3173]: E1203 14:26:06.197306 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197743 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197735372 +0000 UTC m=+28.679112754 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197744 3173 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197770 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197755582 +0000 UTC m=+28.679133064 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197778 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197779 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197785 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197824 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: I1203 14:26:06.197720 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197832 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 
14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197905 3173 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197904 3173 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197799 3173 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197789 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.197779353 +0000 UTC m=+28.679156825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197852 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198023 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:08.197983659 +0000 UTC m=+28.679361041 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197859 3173 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198052 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19803892 +0000 UTC m=+28.679416352 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.197859 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198078 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198067311 +0000 UTC m=+28.679444793 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198095 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198088222 +0000 UTC m=+28.679465694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: I1203 14:26:06.198140 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: I1203 14:26:06.198176 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198190 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198220 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198211115 +0000 UTC m=+28.679588587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: I1203 14:26:06.198248 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198289 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198271497 +0000 UTC m=+28.679648979 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198308 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198309 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198327 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198319228 +0000 UTC m=+28.679696600 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198342 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198336399 +0000 UTC m=+28.679713781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198361 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198348919 +0000 UTC m=+28.679726301 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198371 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198366659 +0000 UTC m=+28.679744041 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:06.205630 master-0 kubenswrapper[3173]: E1203 14:26:06.198385 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19837859 +0000 UTC m=+28.679755972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198410 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198438 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 
14:26:06.198449 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198441352 +0000 UTC m=+28.679818734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198466 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198459172 +0000 UTC m=+28.679836554 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198479 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198472752 +0000 UTC m=+28.679850134 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198479 3173 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198483 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198507 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198501583 +0000 UTC m=+28.679878965 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198507 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198527 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198515924 +0000 UTC m=+28.679893306 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198549 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198551 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198574 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198567055 +0000 UTC m=+28.679944437 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198592 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198602 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198625 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198618657 +0000 UTC m=+28.679996039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198624 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198656 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198662 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198675 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198669998 +0000 UTC m=+28.680047380 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198692 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198711 3173 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198715 3173 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198716 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198735 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert 
podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19872921 +0000 UTC m=+28.680106592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198762 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19875275 +0000 UTC m=+28.680130132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198737 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198792 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198742 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198834 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: E1203 14:26:06.198830 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198820272 +0000 UTC m=+28.680197654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198874 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:06.264025 master-0 kubenswrapper[3173]: I1203 14:26:06.198895 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" 
(UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198912 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198904355 +0000 UTC m=+28.680281737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198928 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198921885 +0000 UTC m=+28.680299267 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198943 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198952 3173 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198966 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198957546 +0000 UTC m=+28.680334928 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.198963 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.198981 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.198972507 +0000 UTC m=+28.680349889 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.198999 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199021 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199052 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199042159 +0000 UTC m=+28.680419541 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199080 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199098 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199104 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19909817 +0000 UTC m=+28.680475552 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199125 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199164 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199195 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199200 3173 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199214 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 
14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199218 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199210643 +0000 UTC m=+28.680588025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199235 3173 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199198 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199238 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199230684 +0000 UTC m=+28.680608136 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199256 3173 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199276 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199286 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199278565 +0000 UTC m=+28.680656057 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199304 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199315 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199334 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199318096 +0000 UTC m=+28.680695538 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199341 3173 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199375 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199350177 +0000 UTC m=+28.680727639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199473 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199495 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:08.199481551 +0000 UTC m=+28.680859103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199548 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: I1203 14:26:06.199581 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.265737 master-0 kubenswrapper[3173]: E1203 14:26:06.199554 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199619 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" 
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199641 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199633785 +0000 UTC m=+28.681011167 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199659 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199673 3173 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199677 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199602 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199694 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199684287 +0000 UTC m=+28.681061749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199724 3173 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199736 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199744 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199737878 +0000 UTC m=+28.681115260 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199762 3173 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199774 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.199763639 +0000 UTC m=+28.681141211 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199796 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19978286 +0000 UTC m=+28.681160342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.199818 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.19980912 +0000 UTC m=+28.681186602 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199901 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199932 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.199966 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200018 3173 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.200029 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200048 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200055 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200047187 +0000 UTC m=+28.681424569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200067 3173 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200079 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200072668 +0000 UTC m=+28.681450050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200095 3173 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.200102 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200129 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200119179 +0000 UTC m=+28.681496641 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200144 3173 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200175 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200166771 +0000 UTC m=+28.681544213 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.200176 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200203 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200190521 +0000 UTC m=+28.681568003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200249 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: I1203 14:26:06.200278 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200303 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200331 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200323275 +0000 UTC m=+28.681700757 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:06.267165 master-0 kubenswrapper[3173]: E1203 14:26:06.200331 3173 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200346 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200339135 +0000 UTC m=+28.681716517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200306 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200364 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200357346 +0000 UTC m=+28.681734908 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200393 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200423 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200446 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200461 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200485 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200476619 +0000 UTC m=+28.681854171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200512 3173 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200525 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200539 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200531561 +0000 UTC m=+28.681908943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200524 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200561 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200553842 +0000 UTC m=+28.681931224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200565 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200579 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200594 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200586612 +0000 UTC m=+28.681964104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200615 3173 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200615 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200634 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200627144 +0000 UTC m=+28.682004526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200648 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200666 3173 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200670 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200689 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200709 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200752 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200758 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200760 3173 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200771 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200756597 +0000 UTC m=+28.682133979 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200764 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200790 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200788 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200780748 +0000 UTC m=+28.682158130 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200813 3173 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200824 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: I1203 14:26:06.200845 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:06.268622 master-0 kubenswrapper[3173]: E1203 14:26:06.200854 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.2008434 +0000 UTC m=+28.682220852 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200875 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.20086864 +0000 UTC m=+28.682246022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200883 3173 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200887 3173 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200895 3173 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200894 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200889021 +0000 UTC m=+28.682266393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.200932 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.200921332 +0000 UTC m=+28.682298794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201029 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201045 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201062 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201066 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201060646 +0000 UTC m=+28.682438028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201091 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201083227 +0000 UTC m=+28.682460689 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201092 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201105 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201097327 +0000 UTC m=+28.682474789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201123 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201150 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201154 3173 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201168 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201159469 +0000 UTC m=+28.682536841 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201194 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201212 3173 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201222 3173 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201231 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201236 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201228451 +0000 UTC m=+28.682605923 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201251 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201243741 +0000 UTC m=+28.682621123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201263 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201287 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201280462 +0000 UTC m=+28.682657844 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201305 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201340 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201348 3173 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201363 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201370 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201360724 +0000 UTC m=+28.682738186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201388 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201381285 +0000 UTC m=+28.682758667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201408 3173 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: I1203 14:26:06.201411 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:06.270230 master-0 kubenswrapper[3173]: E1203 14:26:06.201430 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201438 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201428086 +0000 UTC m=+28.682805518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201455 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201447557 +0000 UTC m=+28.682824939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201469 3173 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201472 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201509 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201499318 +0000 UTC m=+28.682876700 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201511 3173 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201538 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201543 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201535949 +0000 UTC m=+28.682913421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201563 3173 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201584 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201587 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201580801 +0000 UTC m=+28.682958183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201612 3173 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201630 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201625682 +0000 UTC m=+28.683003064 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201625 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201655 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201673 3173 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201676 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201725 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201736 3173 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201738 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201756 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba
nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201748005 +0000 UTC m=+28.683125387 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201768 3173 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201779 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201786 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201781186 +0000 UTC m=+28.683158568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201818 3173 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201819 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201837 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201832288 +0000 UTC m=+28.683209670 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201855 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201849698 +0000 UTC m=+28.683227080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201863 3173 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201872 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201890 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201883239 +0000 UTC m=+28.683260621 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201911 3173 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: I1203 14:26:06.201912 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:06.271707 master-0 kubenswrapper[3173]: E1203 14:26:06.201928 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.20192307 +0000 UTC m=+28.683300452 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.201941 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.201971 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.201980 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.201988 3173 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.201970 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.201989 3173 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.201995 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.201982472 +0000 UTC m=+28.683359924 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202024 3173 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202054 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202042744 +0000 UTC m=+28.683420316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202084 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202097 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202084675 +0000 UTC m=+28.683462147 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202114 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202105176 +0000 UTC m=+28.683482678 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202136 3173 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202144 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202166 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202158117 +0000 UTC m=+28.683535609 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202194 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202222 3173 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202231 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202221529 +0000 UTC m=+28.683598981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202257 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.20224842 +0000 UTC m=+28.683625862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202195 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202289 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202311 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202332 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202357 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202383 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202414 3173 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202443 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202457 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202441665 +0000 UTC m=+28.683819117 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202460 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202468 3173 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202478 3173 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202501 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202490226 +0000 UTC m=+28.683867678 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202527 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: I1203 14:26:06.202565 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202594 3173 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:06.273042 master-0 kubenswrapper[3173]: E1203 14:26:06.202599 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202582609 +0000 UTC m=+28.683960021 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202625 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.2026164 +0000 UTC m=+28.683993882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202626 3173 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202645 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202635251 +0000 UTC m=+28.684012743 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202729 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202720073 +0000 UTC m=+28.684097455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202745 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202739194 +0000 UTC m=+28.684116576 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: I1203 14:26:06.202771 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: I1203 14:26:06.202795 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: I1203 14:26:06.202833 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202891 3173 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 
14:26:06.202912 3173 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202929 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202918449 +0000 UTC m=+28.684295831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202946 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202939259 +0000 UTC m=+28.684316641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.202961 3173 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.203018 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.202992821 +0000 UTC m=+28.684370263 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.204344 3173 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.204367 3173 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.204379 3173 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.274455 master-0 kubenswrapper[3173]: E1203 14:26:06.204422 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.704409231 +0000 UTC m=+27.185786703 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.284252 master-0 kubenswrapper[3173]: I1203 14:26:06.284191 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:06.298244 master-0 kubenswrapper[3173]: E1203 14:26:06.298207 3173 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.298244 master-0 kubenswrapper[3173]: E1203 14:26:06.298241 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.298370 master-0 kubenswrapper[3173]: E1203 14:26:06.298303 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.798283713 +0000 UTC m=+27.279661095 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.299493 master-0 kubenswrapper[3173]: I1203 14:26:06.299449 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:06.300357 master-0 kubenswrapper[3173]: E1203 14:26:06.300329 3173 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.300357 master-0 kubenswrapper[3173]: E1203 14:26:06.300352 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.300446 master-0 kubenswrapper[3173]: E1203 14:26:06.300387 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:06.800377642 +0000 UTC m=+27.281755024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.300446 master-0 kubenswrapper[3173]: I1203 14:26:06.300403 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:06.301647 master-0 kubenswrapper[3173]: I1203 14:26:06.301613 3173 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:06.304920 master-0 kubenswrapper[3173]: I1203 14:26:06.304872 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:06.305092 master-0 kubenswrapper[3173]: I1203 14:26:06.305029 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:06.305092 master-0 kubenswrapper[3173]: I1203 14:26:06.305078 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305044 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305143 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305155 3173 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305241 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305277 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.305362 master-0 
kubenswrapper[3173]: E1203 14:26:06.305290 3173 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305318 3173 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305337 3173 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305338 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.305324333 +0000 UTC m=+27.786701715 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305347 3173 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.305362 master-0 kubenswrapper[3173]: E1203 14:26:06.305369 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.305359404 +0000 UTC m=+27.786736906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: E1203 14:26:06.305490 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:07.305479377 +0000 UTC m=+27.786856759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: I1203 14:26:06.306032 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: I1203 14:26:06.306119 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: E1203 14:26:06.306145 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: E1203 14:26:06.306161 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.306174 master-0 kubenswrapper[3173]: E1203 14:26:06.306169 3173 projected.go:194] Error preparing data for projected 
volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.306394 master-0 kubenswrapper[3173]: E1203 14:26:06.306234 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:06.306394 master-0 kubenswrapper[3173]: E1203 14:26:06.306243 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.306394 master-0 kubenswrapper[3173]: E1203 14:26:06.306249 3173 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.306394 master-0 kubenswrapper[3173]: E1203 14:26:06.306275 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.306267759 +0000 UTC m=+28.787645141 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.306394 master-0 kubenswrapper[3173]: E1203 14:26:06.306288 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.30628316 +0000 UTC m=+28.787660542 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.313896 master-0 kubenswrapper[3173]: I1203 14:26:06.313848 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:06.328933 master-0 kubenswrapper[3173]: I1203 14:26:06.328901 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:06.398670 master-0 kubenswrapper[3173]: I1203 14:26:06.398621 3173 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:06.402678 master-0 kubenswrapper[3173]: I1203 14:26:06.402640 3173 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:06.402678 master-0 kubenswrapper[3173]: [-]has-synced failed: reason withheld Dec 03 14:26:06.402678 master-0 kubenswrapper[3173]: [+]process-running ok Dec 03 14:26:06.402678 master-0 kubenswrapper[3173]: healthz check failed Dec 03 14:26:06.402910 master-0 kubenswrapper[3173]: I1203 14:26:06.402699 3173 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:06.410176 master-0 kubenswrapper[3173]: I1203 14:26:06.410122 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:06.410359 master-0 kubenswrapper[3173]: E1203 14:26:06.410326 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:06.410359 master-0 kubenswrapper[3173]: E1203 14:26:06.410348 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: 
object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.410359 master-0 kubenswrapper[3173]: E1203 14:26:06.410358 3173 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.410468 master-0 kubenswrapper[3173]: E1203 14:26:06.410415 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.410397662 +0000 UTC m=+27.891775044 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.410859 master-0 kubenswrapper[3173]: I1203 14:26:06.410830 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:06.410944 master-0 kubenswrapper[3173]: E1203 14:26:06.410918 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.410944 master-0 kubenswrapper[3173]: I1203 14:26:06.410932 
3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:06.411084 master-0 kubenswrapper[3173]: I1203 14:26:06.410992 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:06.411084 master-0 kubenswrapper[3173]: E1203 14:26:06.410942 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.411084 master-0 kubenswrapper[3173]: E1203 14:26:06.411035 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.411084 master-0 kubenswrapper[3173]: E1203 14:26:06.411049 3173 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411084 master-0 kubenswrapper[3173]: E1203 14:26:06.411063 3173 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 
14:26:06.411084 master-0 kubenswrapper[3173]: E1203 14:26:06.411073 3173 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411111 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.411099902 +0000 UTC m=+28.892477284 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411156 3173 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411172 3173 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411181 3173 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not 
registered] Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: I1203 14:26:06.411156 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411200 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.411187655 +0000 UTC m=+28.892565037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411354 master-0 kubenswrapper[3173]: E1203 14:26:06.411235 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.411224366 +0000 UTC m=+28.892601748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411700 master-0 kubenswrapper[3173]: E1203 14:26:06.411212 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.411700 master-0 kubenswrapper[3173]: E1203 14:26:06.411444 3173 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.411700 master-0 kubenswrapper[3173]: E1203 14:26:06.411455 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.411700 master-0 kubenswrapper[3173]: E1203 14:26:06.411487 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.411477903 +0000 UTC m=+28.892855285 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.513651 master-0 kubenswrapper[3173]: I1203 14:26:06.513566 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:06.513651 master-0 kubenswrapper[3173]: I1203 14:26:06.513629 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:06.513651 master-0 kubenswrapper[3173]: I1203 14:26:06.513651 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: I1203 14:26:06.513745 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: 
\"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: E1203 14:26:06.513787 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: E1203 14:26:06.513820 3173 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: E1203 14:26:06.513841 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: E1203 14:26:06.513852 3173 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: I1203 14:26:06.513790 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:06.513974 master-0 kubenswrapper[3173]: E1203 14:26:06.513866 3173 projected.go:194] Error 
preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.513953 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514034 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.513786 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514053 3173 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514069 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514082 3173 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not 
registered] Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.513907 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.513885067 +0000 UTC m=+28.995262459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.513902 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514143 3173 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514152 3173 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514262 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access 
podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.514248467 +0000 UTC m=+28.995625849 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.514287 master-0 kubenswrapper[3173]: E1203 14:26:06.514294 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.514286648 +0000 UTC m=+28.995664030 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.514764 master-0 kubenswrapper[3173]: E1203 14:26:06.514313 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.514306519 +0000 UTC m=+28.995683901 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.514764 master-0 kubenswrapper[3173]: E1203 14:26:06.514327 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.514321829 +0000 UTC m=+28.995699211 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.515491 master-0 kubenswrapper[3173]: I1203 14:26:06.515455 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:06.515550 master-0 kubenswrapper[3173]: I1203 14:26:06.515517 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" 
(UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:06.515642 master-0 kubenswrapper[3173]: E1203 14:26:06.515608 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:06.515642 master-0 kubenswrapper[3173]: E1203 14:26:06.515629 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.515642 master-0 kubenswrapper[3173]: E1203 14:26:06.515637 3173 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.515758 master-0 kubenswrapper[3173]: E1203 14:26:06.515676 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.515660647 +0000 UTC m=+27.997038029 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.515758 master-0 kubenswrapper[3173]: E1203 14:26:06.515695 3173 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.515758 master-0 kubenswrapper[3173]: E1203 14:26:06.515706 3173 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.515758 master-0 kubenswrapper[3173]: E1203 14:26:06.515714 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.515758 master-0 kubenswrapper[3173]: E1203 14:26:06.515744 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.51573653 +0000 UTC m=+27.997113912 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.620622 master-0 kubenswrapper[3173]: I1203 14:26:06.620267 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:06.620622 master-0 kubenswrapper[3173]: I1203 14:26:06.620486 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:06.620622 master-0 kubenswrapper[3173]: I1203 14:26:06.620540 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:06.620622 master-0 kubenswrapper[3173]: I1203 14:26:06.620574 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: 
\"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620695 3173 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620728 3173 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620743 3173 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620811 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.620794499 +0000 UTC m=+28.102171881 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620859 3173 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620879 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:06.620949 master-0 kubenswrapper[3173]: E1203 14:26:06.620937 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.620957 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.620982 3173 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.620996 3173 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object 
"openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: I1203 14:26:06.620893 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.621019 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.621081 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.621054806 +0000 UTC m=+28.102432188 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.620900 3173 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.621194 master-0 kubenswrapper[3173]: E1203 14:26:06.621177 3173 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.621495 master-0 kubenswrapper[3173]: E1203 14:26:06.620867 3173 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:06.621495 master-0 kubenswrapper[3173]: E1203 14:26:06.621250 3173 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.621495 master-0 kubenswrapper[3173]: E1203 14:26:06.621115 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.621095857 +0000 UTC m=+28.102473229 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.621495 master-0 kubenswrapper[3173]: E1203 14:26:06.621313 3173 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.622093 master-0 kubenswrapper[3173]: E1203 14:26:06.622066 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.622042974 +0000 UTC m=+28.103420356 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.622135 master-0 kubenswrapper[3173]: E1203 14:26:06.622101 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:07.622093776 +0000 UTC m=+28.103471148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.622645 master-0 kubenswrapper[3173]: I1203 14:26:06.622619 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:06.622788 master-0 kubenswrapper[3173]: E1203 14:26:06.622767 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.622825 master-0 kubenswrapper[3173]: E1203 14:26:06.622787 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.622825 master-0 kubenswrapper[3173]: E1203 14:26:06.622797 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.622883 master-0 kubenswrapper[3173]: E1203 14:26:06.622830 3173 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.622820356 +0000 UTC m=+29.104197738 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.724657 master-0 kubenswrapper[3173]: I1203 14:26:06.724529 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724686 3173 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724707 3173 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: I1203 14:26:06.724706 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724720 3173 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: I1203 14:26:06.724871 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724769 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724914 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724925 3173 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.724959 3173 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.724942282 +0000 UTC m=+29.206319664 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: I1203 14:26:06.725042 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725116 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725129 3173 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725125 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:08.725045495 +0000 UTC m=+29.206422887 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725138 3173 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725207 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725248 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.725238071 +0000 UTC m=+29.206615553 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725250 3173 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725269 3173 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.725607 master-0 kubenswrapper[3173]: E1203 14:26:06.725418 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.725397165 +0000 UTC m=+29.206774547 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.726692 master-0 kubenswrapper[3173]: I1203 14:26:06.726654 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:06.726782 master-0 kubenswrapper[3173]: I1203 14:26:06.726756 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:06.726782 master-0 kubenswrapper[3173]: E1203 14:26:06.726765 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:06.726855 master-0 kubenswrapper[3173]: I1203 14:26:06.726784 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" 
Dec 03 14:26:06.726855 master-0 kubenswrapper[3173]: E1203 14:26:06.726795 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.726855 master-0 kubenswrapper[3173]: E1203 14:26:06.726811 3173 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.726961 master-0 kubenswrapper[3173]: E1203 14:26:06.726863 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.726847596 +0000 UTC m=+28.208225078 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.726961 master-0 kubenswrapper[3173]: E1203 14:26:06.726895 3173 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.726961 master-0 kubenswrapper[3173]: E1203 14:26:06.726907 3173 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.726961 master-0 kubenswrapper[3173]: E1203 14:26:06.726915 3173 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.726961 master-0 kubenswrapper[3173]: E1203 14:26:06.726942 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.726935229 +0000 UTC m=+28.208312611 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.727295 master-0 kubenswrapper[3173]: E1203 14:26:06.727030 3173 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.727295 master-0 kubenswrapper[3173]: E1203 14:26:06.727052 3173 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.727295 master-0 kubenswrapper[3173]: E1203 14:26:06.727068 3173 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.727295 master-0 kubenswrapper[3173]: E1203 14:26:06.727154 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.727142665 +0000 UTC m=+28.208520117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.830103 master-0 kubenswrapper[3173]: I1203 14:26:06.830021 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:06.830359 master-0 kubenswrapper[3173]: I1203 14:26:06.830168 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:06.830359 master-0 kubenswrapper[3173]: E1203 14:26:06.830255 3173 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.830359 master-0 kubenswrapper[3173]: E1203 14:26:06.830297 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.830359 master-0 kubenswrapper[3173]: E1203 14:26:06.830341 3173 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.830359 
master-0 kubenswrapper[3173]: E1203 14:26:06.830362 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.830582 master-0 kubenswrapper[3173]: E1203 14:26:06.830420 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.830398913 +0000 UTC m=+28.311776285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.830641 master-0 kubenswrapper[3173]: E1203 14:26:06.830617 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:07.830589148 +0000 UTC m=+28.311966610 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:06.933699 master-0 kubenswrapper[3173]: I1203 14:26:06.933653 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:06.933699 master-0 kubenswrapper[3173]: I1203 14:26:06.933710 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:06.933868 master-0 kubenswrapper[3173]: I1203 14:26:06.933739 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:06.933868 master-0 kubenswrapper[3173]: I1203 14:26:06.933817 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 
14:26:06.934132 master-0 kubenswrapper[3173]: I1203 14:26:06.933981 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934306 3173 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934346 3173 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934350 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934380 3173 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934361 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934393 3173 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object 
"openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934421 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934439 3173 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934306 3173 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934448 3173 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934459 3173 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.934661 master-0 kubenswrapper[3173]: E1203 14:26:06.934647 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934692 3173 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:06.935163 master-0 
kubenswrapper[3173]: E1203 14:26:06.934709 3173 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934659 3173 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934459 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.934443633 +0000 UTC m=+29.415821015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934946 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.934923186 +0000 UTC m=+29.416300568 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934977 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.934968837 +0000 UTC m=+29.416346219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.934993 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.934988128 +0000 UTC m=+29.416365510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:06.935163 master-0 kubenswrapper[3173]: E1203 14:26:06.935055 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:08.93504566 +0000 UTC m=+29.416423122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997069 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997094 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997083 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997143 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997115 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997169 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997144 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997209 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997126 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997170 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997146 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997227 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.997218 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997243 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997072 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997217 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997216 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997225 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997262 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.997532 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997603 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997657 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997690 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997714 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997745 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997767 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997790 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997820 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997850 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997878 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997906 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997931 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997954 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.997978 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998025 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998054 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.998150 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998201 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998238 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998265 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998294 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998326 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998355 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998383 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998406 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.998533 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998566 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998594 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998616 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998638 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.998691 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998722 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998745 3173 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.998801 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: I1203 14:26:06.998832 3173 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.998902 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999028 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999209 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999263 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999332 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999385 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999447 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:07.002049 master-0 kubenswrapper[3173]: E1203 14:26:06.999510 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999579 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999652 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999732 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999804 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999871 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999923 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:06.999977 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000081 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000163 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000219 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000279 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000361 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000431 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000502 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000568 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000646 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000704 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000793 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000846 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000910 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.000988 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001076 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001145 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001199 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001290 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001367 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001466 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001541 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001598 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001654 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001716 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:07.004925 master-0 kubenswrapper[3173]: E1203 14:26:07.001773 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:07.006170 master-0 kubenswrapper[3173]: E1203 14:26:07.001846 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:07.006170 master-0 kubenswrapper[3173]: E1203 14:26:07.001900 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:07.006170 master-0 kubenswrapper[3173]: E1203 14:26:07.001971 3173 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:07.038184 master-0 kubenswrapper[3173]: I1203 14:26:07.038119 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:07.038184 master-0 kubenswrapper[3173]: I1203 14:26:07.038195 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:07.038681 master-0 kubenswrapper[3173]: I1203 14:26:07.038224 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:07.038681 master-0 kubenswrapper[3173]: I1203 14:26:07.038442 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:07.038681 master-0 kubenswrapper[3173]: I1203 
14:26:07.038471 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:07.038681 master-0 kubenswrapper[3173]: I1203 14:26:07.038596 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:07.038681 master-0 kubenswrapper[3173]: E1203 14:26:07.038633 3173 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038665 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038714 3173 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038732 3173 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 
14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038741 3173 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038784 3173 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038742 3173 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038800 3173 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038808 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038809 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038836 3173 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038679 3173 projected.go:288] Couldn't get configMap 
openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038848 3173 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.038865 master-0 kubenswrapper[3173]: E1203 14:26:07.038860 3173 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.038802 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.038784421 +0000 UTC m=+29.520161803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.038784 3173 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.038938 3173 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.038990 3173 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.039089 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.039063889 +0000 UTC m=+29.520441301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.039122 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.039110061 +0000 UTC m=+29.520487473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.039147 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.039134281 +0000 UTC m=+29.520511693 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.039337 master-0 kubenswrapper[3173]: E1203 14:26:07.039175 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.039164682 +0000 UTC m=+29.520542104 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.039740 master-0 kubenswrapper[3173]: E1203 14:26:07.039433 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.039411309 +0000 UTC m=+29.520788691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.148773 master-0 kubenswrapper[3173]: I1203 14:26:07.148380 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b340553b-d483-4839-8328-518f27770832\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"samples-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92p99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-92p99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-6d64b47964-jjd7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: I1203 14:26:07.151254 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.151739 3173 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.151789 3173 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.151873 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.151852978 +0000 UTC m=+29.633230360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: I1203 14:26:07.152082 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: I1203 14:26:07.152156 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152434 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152461 3173 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152459 3173 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object 
"openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152515 3173 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152725 3173 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.152833 master-0 kubenswrapper[3173]: E1203 14:26:07.152474 3173 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.153500 master-0 kubenswrapper[3173]: E1203 14:26:07.152871 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.152824065 +0000 UTC m=+29.634201487 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.153500 master-0 kubenswrapper[3173]: E1203 14:26:07.153050 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.153032341 +0000 UTC m=+29.634409903 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.175137 master-0 kubenswrapper[3173]: I1203 14:26:07.175092 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c"} Dec 03 14:26:07.176180 master-0 kubenswrapper[3173]: I1203 14:26:07.176130 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4"} Dec 03 14:26:07.177899 
master-0 kubenswrapper[3173]: I1203 14:26:07.177031 3173 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6"} Dec 03 14:26:07.242423 master-0 kubenswrapper[3173]: I1203 14:26:07.242330 3173 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eecc43f5-708f-4395-98cc-696b243d6321\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/mcs\\\",\\\"name\\\":\\\"certs\\\"},{\\\"mountPath\\\":\\\"/etc/mcs/bootstrap-token\\\",\\\"name\\\":\\\"node-bootstrap-token\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szdzx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-pvrfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:07.256833 master-0 kubenswrapper[3173]: I1203 14:26:07.256745 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:07.257080 master-0 kubenswrapper[3173]: I1203 14:26:07.256852 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:07.257080 master-0 kubenswrapper[3173]: E1203 14:26:07.256987 3173 projected.go:288] Couldn't get configMap 
openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:07.257080 master-0 kubenswrapper[3173]: E1203 14:26:07.257039 3173 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.257080 master-0 kubenswrapper[3173]: E1203 14:26:07.257052 3173 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: I1203 14:26:07.257085 3173 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257119 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.257096742 +0000 UTC m=+29.738474124 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257158 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257196 3173 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257211 3173 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257233 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257258 3173 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:07.257270 master-0 kubenswrapper[3173]: E1203 14:26:07.257270 3173 projected.go:194] Error preparing data for projected volume 
kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.257648 master-0 kubenswrapper[3173]: E1203 14:26:07.257328 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.257312249 +0000 UTC m=+29.738689721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.257648 master-0 kubenswrapper[3173]: E1203 14:26:07.257391 3173 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.25736801 +0000 UTC m=+29.738745472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:07.310676 master-0 systemd[1]: Stopping Kubernetes Kubelet... 
Dec 03 14:26:07.327573 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Dec 03 14:26:07.327795 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Dec 03 14:26:07.328727 master-0 systemd[1]: kubelet.service: Consumed 3.553s CPU time.
Dec 03 14:26:07.337899 master-0 systemd[1]: Starting Kubernetes Kubelet...
Dec 03 14:26:07.475388 master-0 kubenswrapper[4318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:26:07.475388 master-0 kubenswrapper[4318]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 03 14:26:07.475388 master-0 kubenswrapper[4318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:26:07.476267 master-0 kubenswrapper[4318]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:26:07.476267 master-0 kubenswrapper[4318]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 03 14:26:07.476267 master-0 kubenswrapper[4318]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 03 14:26:07.476267 master-0 kubenswrapper[4318]: I1203 14:26:07.475564 4318 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479835 4318 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479856 4318 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479861 4318 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479865 4318 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479869 4318 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479873 4318 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479877 4318 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479881 4318 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479885 4318 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479888 4318 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479892 4318 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479898 4318 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479902 4318 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479906 4318 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479909 4318 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479913 4318 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479916 4318 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479920 4318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479924 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.479919 master-0 kubenswrapper[4318]: W1203 14:26:07.479928 4318 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479933 4318 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479937 4318 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479942 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479948 4318 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479952 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479956 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479962 4318 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479968 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479972 4318 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479976 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479980 4318 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479983 4318 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479987 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479991 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479995 4318 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.479998 4318 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.480017 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.480021 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.480025 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:26:07.480781 master-0 kubenswrapper[4318]: W1203 14:26:07.480029 4318 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480034 4318 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480039 4318 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480043 4318 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480047 4318 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480051 4318 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480055 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480058 4318 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480062 4318 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480068 4318 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480071 4318 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480075 4318 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480078 4318 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480082 4318 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480087 4318 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480090 4318 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480094 4318 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480097 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480101 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:26:07.481903 master-0 kubenswrapper[4318]: W1203 14:26:07.480104 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480108 4318 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480115 4318 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480120 4318 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480125 4318 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480130 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480133 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480137 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480141 4318 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480144 4318 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480148 4318 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480151 4318 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480155 4318 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: W1203 14:26:07.480161 4318 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480257 4318 flags.go:64] FLAG: --address="0.0.0.0"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480265 4318 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480272 4318 flags.go:64] FLAG: --anonymous-auth="true"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480278 4318 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480283 4318 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480295 4318 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480305 4318 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 03 14:26:07.482619 master-0 kubenswrapper[4318]: I1203 14:26:07.480310 4318 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480314 4318 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480319 4318 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480323 4318 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480328 4318 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480332 4318 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480337 4318 flags.go:64] FLAG: --cgroup-root=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480344 4318 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480348 4318 flags.go:64] FLAG: --client-ca-file=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480352 4318 flags.go:64] FLAG: --cloud-config=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480356 4318 flags.go:64] FLAG: --cloud-provider=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480360 4318 flags.go:64] FLAG: --cluster-dns="[]"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480365 4318 flags.go:64] FLAG: --cluster-domain=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480369 4318 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480373 4318 flags.go:64] FLAG: --config-dir=""
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480380 4318 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480385 4318 flags.go:64] FLAG: --container-log-max-files="5"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480390 4318 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480395 4318 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480400 4318 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480405 4318 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480410 4318 flags.go:64] FLAG: --contention-profiling="false"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480414 4318 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480419 4318 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480427 4318 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 03 14:26:07.483606 master-0 kubenswrapper[4318]: I1203 14:26:07.480431 4318 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480436 4318 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480440 4318 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480445 4318 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480450 4318 flags.go:64] FLAG: --enable-load-reader="false"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480460 4318 flags.go:64] FLAG: --enable-server="true"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480464 4318 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480472 4318 flags.go:64] FLAG: --event-burst="100"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480477 4318 flags.go:64] FLAG: --event-qps="50"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480481 4318 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480486 4318 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480490 4318 flags.go:64] FLAG: --eviction-hard=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480495 4318 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480500 4318 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480504 4318 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480508 4318 flags.go:64] FLAG: --eviction-soft=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480515 4318 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480519 4318 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480524 4318 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480528 4318 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480532 4318 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480535 4318 flags.go:64] FLAG: --fail-swap-on="true"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480540 4318 flags.go:64] FLAG: --feature-gates=""
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480545 4318 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480551 4318 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 03 14:26:07.484922 master-0 kubenswrapper[4318]: I1203 14:26:07.480556 4318 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480560 4318 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480565 4318 flags.go:64] FLAG: --healthz-port="10248"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480569 4318 flags.go:64] FLAG: --help="false"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480573 4318 flags.go:64] FLAG: --hostname-override=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480577 4318 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480583 4318 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480587 4318 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480593 4318 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480597 4318 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480601 4318 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480605 4318 flags.go:64] FLAG: --image-service-endpoint=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480609 4318 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480613 4318 flags.go:64] FLAG: --kube-api-burst="100"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480618 4318 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480623 4318 flags.go:64] FLAG: --kube-api-qps="50"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480629 4318 flags.go:64] FLAG: --kube-reserved=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480633 4318 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480637 4318 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480643 4318 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480648 4318 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480652 4318 flags.go:64] FLAG: --lock-file=""
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480657 4318 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480662 4318 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480667 4318 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480675 4318 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 03 14:26:07.486197 master-0 kubenswrapper[4318]: I1203 14:26:07.480681 4318 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480685 4318 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480689 4318 flags.go:64] FLAG: --logging-format="text"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480694 4318 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480698 4318 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480702 4318 flags.go:64] FLAG: --manifest-url=""
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480707 4318 flags.go:64] FLAG: --manifest-url-header=""
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480715 4318 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480719 4318 flags.go:64] FLAG: --max-open-files="1000000"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480725 4318 flags.go:64] FLAG: --max-pods="110"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480729 4318 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480733 4318 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480738 4318 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480742 4318 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480746 4318 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480750 4318 flags.go:64] FLAG: --node-ip="192.168.32.10"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480757 4318 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480766 4318 flags.go:64] FLAG: --node-status-max-images="50"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480771 4318 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480781 4318 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480786 4318 flags.go:64] FLAG: --pod-cidr=""
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480790 4318 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd"
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480799 4318 flags.go:64] FLAG: --pod-manifest-path=""
Dec 03 14:26:07.487331 master-0 kubenswrapper[4318]: I1203 14:26:07.480803 4318 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480808 4318 flags.go:64] FLAG: --pods-per-core="0"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480813 4318 flags.go:64] FLAG: --port="10250"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480817 4318 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480822 4318 flags.go:64] FLAG: --provider-id=""
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480826 4318 flags.go:64] FLAG: --qos-reserved=""
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480831 4318 flags.go:64] FLAG: --read-only-port="10255"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480835 4318 flags.go:64] FLAG: --register-node="true"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480841 4318 flags.go:64] FLAG: --register-schedulable="true"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480845 4318 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480852 4318 flags.go:64] FLAG: --registry-burst="10"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.480857 4318 flags.go:64] FLAG: --registry-qps="5"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481042 4318 flags.go:64] FLAG: --reserved-cpus=""
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481053 4318 flags.go:64] FLAG: --reserved-memory=""
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481484 4318 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481489 4318 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481494 4318 flags.go:64] FLAG: --rotate-certificates="false"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481498 4318 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481502 4318 flags.go:64] FLAG: --runonce="false"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481507 4318 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481511 4318 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481515 4318 flags.go:64] FLAG: --seccomp-default="false"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481519 4318 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481523 4318 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481528 4318 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481532 4318 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 03 14:26:07.488276 master-0 kubenswrapper[4318]: I1203 14:26:07.481536 4318 flags.go:64] FLAG: --storage-driver-password="root"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481540 4318 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481544 4318 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481549 4318 flags.go:64] FLAG: --storage-driver-user="root"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481553 4318 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481557 4318 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481561 4318 flags.go:64] FLAG: --system-cgroups=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481565 4318 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481574 4318 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481579 4318 flags.go:64] FLAG: --tls-cert-file=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481583 4318 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481588 4318 flags.go:64] FLAG: --tls-min-version=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481592 4318 flags.go:64] FLAG: --tls-private-key-file=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481596 4318 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481600 4318 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481604 4318 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481608 4318 flags.go:64] FLAG: --v="2"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481614 4318 flags.go:64] FLAG: --version="false"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481619 4318 flags.go:64] FLAG: --vmodule=""
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481625 4318 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: I1203 14:26:07.481629 4318 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: W1203 14:26:07.481734 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: W1203 14:26:07.481738 4318 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: W1203 14:26:07.481742 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.489223 master-0 kubenswrapper[4318]: W1203 14:26:07.481747 4318 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481751 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481755 4318 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481758 4318 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481766 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481770 4318 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.481773 4318 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482220 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482224 4318 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482228 4318 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482232 4318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482235 4318 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482239 4318 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482242 4318 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482246 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482250 4318 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482253 4318 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482258 4318 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482263 4318 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:26:07.490281 master-0 kubenswrapper[4318]: W1203 14:26:07.482268 4318 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482272 4318 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482276 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482281 4318 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482285 4318 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482289 4318 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482292 4318 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:26:07.491094 master-0
kubenswrapper[4318]: W1203 14:26:07.482296 4318 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482299 4318 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482303 4318 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482306 4318 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482310 4318 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482313 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482317 4318 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482320 4318 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482326 4318 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482330 4318 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482333 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482336 4318 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482340 4318 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482344 4318 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Dec 03 14:26:07.491094 master-0 kubenswrapper[4318]: W1203 14:26:07.482348 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482351 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482355 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482358 4318 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482361 4318 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482365 4318 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482368 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482372 4318 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482376 4318 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482379 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482382 4318 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482387 4318 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482391 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482395 4318 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482399 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482404 4318 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482409 4318 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482413 4318 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482417 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:26:07.492733 master-0 kubenswrapper[4318]: W1203 14:26:07.482421 4318 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482425 4318 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482429 4318 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482433 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482437 4318 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482440 4318 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482444 4318 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482452 4318 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482456 4318 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.482459 4318 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: I1203 14:26:07.482468 4318 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: I1203 14:26:07.492058 4318 server.go:491] "Kubelet version" kubeletVersion="v1.31.13"
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: I1203 14:26:07.492263 4318 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.492339 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.492348 4318 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:26:07.493596 master-0 kubenswrapper[4318]: W1203 14:26:07.492354 4318 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492359 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492364 4318 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492368 4318 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492372 4318 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492375 4318 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492379 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492383 4318 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492387 4318 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492391 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492394 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492398 4318 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492402 4318 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492407 4318 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492411 4318 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492414 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492418 4318 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492421 4318 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492425 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492429 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:26:07.494196 master-0 kubenswrapper[4318]: W1203 14:26:07.492432 4318 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492437 4318 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492441 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492444 4318 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492448 4318 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492454 4318 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492458 4318 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492461 4318 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492465 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492470 4318 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492474 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492477 4318 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492481 4318 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492485 4318 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492488 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492493 4318 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492497 4318 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492501 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492505 4318 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:26:07.495353 master-0 kubenswrapper[4318]: W1203 14:26:07.492511 4318 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492515 4318 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492519 4318 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492522 4318 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492527 4318 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492531 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492534 4318 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492539 4318 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492542 4318 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492546 4318 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492549 4318 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492553 4318 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492556 4318 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492560 4318 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492564 4318 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492567 4318 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492572 4318 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492576 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492581 4318 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492585 4318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.496274 master-0 kubenswrapper[4318]: W1203 14:26:07.492588 4318 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492592 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492599 4318 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492603 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492607 4318 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492611 4318 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492615 4318 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492619 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492622 4318 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492626 4318 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492630 4318 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: I1203 14:26:07.492636 4318 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492807 4318 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492817 4318 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492822 4318 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:26:07.496967 master-0 kubenswrapper[4318]: W1203 14:26:07.492826 4318 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492830 4318 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492834 4318 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492838 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492842 4318 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492846 4318 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492850 4318 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492855 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492859 4318 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492863 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492866 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492870 4318 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492874 4318 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492877 4318 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492882 4318 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492886 4318 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492891 4318 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492895 4318 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.492899 4318 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:26:07.497863 master-0 kubenswrapper[4318]: W1203 14:26:07.493279 4318 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493289 4318 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493294 4318 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493298 4318 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493304 4318 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493308 4318 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493312 4318 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493316 4318 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493319 4318 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493323 4318 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493327 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493330 4318 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493334 4318 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493338 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493341 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493345 4318 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493348 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493352 4318 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493357 4318 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493361 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:26:07.500105 master-0 kubenswrapper[4318]: W1203 14:26:07.493364 4318 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.493368 4318 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.493372 4318 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.493375 4318 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.493379 4318 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494106 4318 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494119 4318 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494123 4318 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494128 4318 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494133 4318 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494137 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494141 4318 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494145 4318 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494149 4318 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494154 4318 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494158 4318 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494162 4318 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494169 4318 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494174 4318 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494178 4318 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:26:07.500929 master-0 kubenswrapper[4318]: W1203 14:26:07.494185 4318 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494189 4318 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494193 4318 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494198 4318 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494205 4318 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494209 4318 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494213 4318 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494217 4318 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494221 4318 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: W1203 14:26:07.494225 4318 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: I1203 14:26:07.494233 4318 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: I1203 14:26:07.494411 4318 server.go:940] "Client rotation is on, will bootstrap in background" Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: I1203 14:26:07.496204 4318 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: I1203 14:26:07.496283 4318 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 03 14:26:07.501834 master-0 kubenswrapper[4318]: I1203 14:26:07.496492 4318 server.go:997] "Starting client certificate rotation" Dec 03 14:26:07.502496 master-0 kubenswrapper[4318]: I1203 14:26:07.496501 4318 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 03 14:26:07.502496 master-0 kubenswrapper[4318]: I1203 14:26:07.496800 4318 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 08:42:00.517971094 +0000 UTC Dec 03 14:26:07.502496 master-0 kubenswrapper[4318]: I1203 14:26:07.496921 4318 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h15m53.021053581s for next certificate rotation Dec 03 14:26:07.502496 master-0 kubenswrapper[4318]: I1203 14:26:07.497123 4318 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:26:07.502496 master-0 kubenswrapper[4318]: I1203 14:26:07.499550 4318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 03 14:26:07.507898 master-0 kubenswrapper[4318]: I1203 14:26:07.507852 4318 log.go:25] "Validated CRI v1 runtime API" Dec 03 14:26:07.513836 master-0 kubenswrapper[4318]: I1203 14:26:07.513793 4318 log.go:25] "Validated CRI v1 image API" Dec 03 14:26:07.518772 master-0 kubenswrapper[4318]: I1203 14:26:07.518687 4318 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 03 14:26:07.528300 master-0 kubenswrapper[4318]: I1203 14:26:07.528235 4318 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3] Dec 03 14:26:07.529128 master-0 kubenswrapper[4318]: I1203 14:26:07.528287 4318 fs.go:136] Filesystem 
partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm major:0 minor:343 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm major:0 minor:354 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm major:0 minor:290 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm major:0 minor:383 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm major:0 minor:52 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm major:0 minor:372 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm major:0 minor:332 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm major:0 minor:246 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm major:0 minor:221 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm major:0 minor:60 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:284 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9:{mountpoint:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 major:0 minor:373 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m:{mountpoint:/var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl:{mountpoint:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl major:0 minor:341 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj major:0 minor:330 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j major:0 minor:321 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out major:0 minor:204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb:{mountpoint:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb major:0 minor:331 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate major:0 minor:203 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth major:0 minor:211 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6:{mountpoint:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6 major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:201 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd major:0 minor:233 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2:{mountpoint:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 major:0 minor:334 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:328 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 major:0 minor:280 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t:{mountpoint:/var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4:{mountpoint:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg:{mountpoint:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q:{mountpoint:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls major:0 minor:364 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn:{mountpoint:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn major:0 minor:324 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:369 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:322 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj:{mountpoint:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj major:0 minor:325 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:329 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:323 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:366 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert major:0 minor:210 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx major:0 minor:365 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:207 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/54a2ca96b854d7cfeeb408b44caf1bba58af8b38dc5b28680252d8983b5074d5/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-104:{mountpoint:/var/lib/containers/storage/overlay/7bc7920e5c5bed2b596a520147c4e3c84a11dfb35bc62abcd7cad2582980ae4d/merged major:0 minor:104 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/e8d5036712eaaf52e8b303ae3a0ad06094cdc4985b0b8fb09869b40a2d4b8e56/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/19c1abe7a997eb037d55d5145fe86648c2db90813114b12c9c7652c0f1d1f1e7/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/8bfb67c7576d13e90076192677aa30b86c5042190c5b6305f1789f69da101ecb/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/59903b5296fedf6c0f24ab6f5abe6f7d0e7b646a02e422c69a53c699d926fcae/merged major:0 minor:154 fsType:overlay blockSize:0} 
overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/4a8abba7f7a89d47f29b0fd494c499fad5635f4df1f05a39f6f0c54b9f98058f/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/0e6c123e49cbfc8ad2310c6aa2234cd86fe197e0979eeed90c9035eb51ef7a31/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/fff3da91b4014c713106ce07bf08a1c3459095fccf3f23fd44eb2219a0aa9423/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/46179b738d40d92cbfc79d3828e38aadf355cbc146a4dd58186db60a45867e03/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/56bd511792ab0d8730fa7addae37d28e9ca7e134e915c0ab16fcf4f1693cbc53/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/8e3259ff45bb019242db4b7cce93dc1f95123597443c95e7df078cdb35b562e1/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/8a8544b208887711d779c3cfa299a2947a0ffd69603963cd25ad724cf8d9eeca/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/bdb7a11dff26b5d1d84e830fc9c06adbe09df14aecbec250545cf3f3e279a688/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/7c8f1a3b6457dc2f18634fe466778d995b421ba3f8744b45230fa9f530686f16/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/9186d32c1831fd31a97eaeef51bea195ad2d43197748ad4a41ed0f3d0a6ee9b0/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/6adc24b4424c655c64aac023a5b302caac5a8f11e615c9fbbaae99e20f710e17/merged major:0 minor:230 fsType:overlay blockSize:0} 
overlay_0-239:{mountpoint:/var/lib/containers/storage/overlay/d25383c868b18ce9e4c7f658d4c7a43e0defc6249bd51aa636cfb461d08af14f/merged major:0 minor:239 fsType:overlay blockSize:0} overlay_0-241:{mountpoint:/var/lib/containers/storage/overlay/6118f35558c7664fe3716f7a462fcfc15db7caa767ff8247d8b984f03970ea91/merged major:0 minor:241 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/a49a3e5914e13a132d03cad2d1862009a7dcdd29ffd9baf1aee3ce827dfdb033/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/3a617a255aeaaedbf90e1b05df4940ff7d4a7da6d70625dae81a8c944d7ec8ed/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-254:{mountpoint:/var/lib/containers/storage/overlay/c6f41e97e4cf8b0b54abdf6a183e2baa9bdf2aa9c8a2910d19bc0c05ccdfa615/merged major:0 minor:254 fsType:overlay blockSize:0} overlay_0-256:{mountpoint:/var/lib/containers/storage/overlay/fb6726fdfb38a0b5e3e7e775d27c586ca96e01fe6b7711ba33c2cd0565b1a736/merged major:0 minor:256 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/76b7976a0f6e3651e1ba646f6a5371dd22406a456acb02c5ec8b0625a841ccb9/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/b45fc4c5fdf18ac4e7662cbb3c67b8de6acb76eb91a9ee784ad2df94e3a7e0b5/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/36739f71a4fea6ea2c2709e345388c28447d9da6c43b115938c0871e0c96c7ce/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/0c7448af4269f7e887ef7ee1cdbbdbf5800491a751ac364aaa1152d5fb1a2522/merged major:0 minor:288 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/c3a7482c4dffd1b208a101d86d94587d93f9ac4e298a30f4465f8430b08bf171/merged major:0 minor:292 fsType:overlay blockSize:0} 
overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/a5c613ebc746bcd0b0ec54e931b5fcdd48b4948f8959dd9fccf64862a514954e/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/1a870f0091c53ce7e68100f825081dc06532a7acc04ab2476e1572799644ac15/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/858f5caa75e09a978426dc3662173626cc8f9d1dd9077d229a06249b7b2dd6a7/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/ebc3de299fa9e1ecb9d91fc239c2e2329f323489ae1fe0a7f0925f30b743c8b5/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/f840473344b63f1ec7fb75ec5759b754069756ac31f987393f5a9135084406c3/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/3947d1d2207201c84046f22dbc82817ed65db35d7111517cbd0a986b06b28769/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/90e25eaa2e6ed82a1c70dca354fa2f836e4515fd04760de1988b7de403165eb4/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/9faee4ccfe835e34371ff4f14f36dfc1bf10c24b0604397d7204cba02fb46b32/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/01751de47ddf014ec871d5fb8a041fd4cae16069da06a92e0a79023108052984/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/76ed65b706b696363076e37e748c69776994125833e2f378300e5cebce80c0a5/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/9cc2f78d3f18f94969060aba283ecb87b8ed8559f05ddf17b1eb0048e8d84884/merged major:0 minor:350 fsType:overlay blockSize:0} 
overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/2729f8b2e58137fcc6c6774811ee683758fd04bc0b2bfe6df070685fe3b70f59/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/e9e01346c8a2d7241f49e4f4bd57e7b080c98e1a22de89ceed59f2602c2e07fd/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/c9cd2993bf4a08eb6b65697ef842284a9292e73a56a8ad258f599bd71ea1ece7/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/c6bc034fda31cdd9a7701e912b1a0874cec01ca6b627c8a68c9d962b089f05ca/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/f4ba6c4c0348a08b33f5544dfede0ee96ee2fcdb7ba6fdbded84f4804e9762d0/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/7e01f14a15a077455419152f299985b02d0c419279664230cfaa1aed75464d5f/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/be4fbba248c74bf90d2d81b6f5403858f257dbd9a3fc20914f8d64fd13c1df4b/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/eabb4d6eef50e22a4f75fb87f976b587c99935ba0aca8d1c496e6ec92bd4fe55/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/500249d54611270d3d294a32f479ad3438fa208292aaa617bece31bf8e6b2abd/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/c463fd1eaa631b4b96e68db0523fdf31d609f91a4e9140c374b8fdfb0a63f10c/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/54dec5ba4f99898136f31bb58b3153e7c4fba7d49c8bcaa492353ea5f8a35367/merged major:0 minor:387 fsType:overlay blockSize:0} 
overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/5d9dc9c4a95a9f40862e419c14922df59496dd5c0062aabd17e4ad1375aa4dff/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/d8c87f1adedcbb1d63b45d2be0787b7a59ef5655db7bfd2cfd68c704413ce6f8/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/88e20c22083052c1ad725846418a2c9a921aaf6152b2580a07bea91e95c627c1/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/0e52c31c91a8d1ec1ca1c60896eb3aba79db61eb55ff5de9e1c06c0c62e42727/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/eb8931bd18b02778ad54a37066f2719d184c69759b32663df6abc759fa233fda/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/97e74ec6f67fbf677ffdf637c333af3856cb09f0cd53e8daa07e58662632ac0e/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/9f2afc1d15e1ba68d51fd13405955f4632590c1fb67008d69b95b439a57a0a2e/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/b3358ac5358f40f9c02ee05cee6c13403ff18226aba1a2a495c9696f44206cf1/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/80bf0d219bcbddf1ac7e9436e363fff144c45269170168d0c65f3d48e07da041/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/aed1aab4eb1eb899582dcc2612091c9ae1bbb4d177753cc631076c50bc5658f0/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/ab13992a32e65958e37796b32c2d69a96c03c0cc1c7b0e8729e407591e614586/merged major:0 minor:68 fsType:overlay blockSize:0} 
overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/ff1abe9cbc07c89f935f87d8363c9e72abfb9d8cf3350b708cc7faaffdfc3290/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/24c2646ef35171433357fd7c7c43629e43f1e1e7c662b89d1b5abde855a87d87/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/5a5356576965519a7d664f1f0f90f7c5ed07240c1c9f95020e9728f5b02f1adf/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/ecebccce8430047e5db0b7b50f8180e353879a387a43a506e6aa7bb0c8b3996f/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/ff7eebf5b23c55d79ccc0438fae54f44f0850a75a71d438f23d42d25985140b3/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/b09896e7dfa2a083ca8a17483668de03a9bf943e0ecf3b062c2b5b24643d47c1/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/354bdc37d023fac0660eb8c2be876ec1bcfdc92777778f0486e48ea3b6c1e6e3/merged major:0 minor:94 fsType:overlay blockSize:0}] Dec 03 14:26:07.555230 master-0 kubenswrapper[4318]: I1203 14:26:07.554627 4318 manager.go:217] Machine: {Timestamp:2025-12-03 14:26:07.553643712 +0000 UTC m=+0.117311538 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c BootID:5a54df78-64a7-4b65-a168-d6e871bf4ce7 Filesystems:[{Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 
DeviceMinor:203 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn DeviceMajor:0 DeviceMinor:324 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:322 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 DeviceMajor:0 DeviceMinor:334 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls DeviceMajor:0 DeviceMinor:364 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:201 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:323 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm DeviceMajor:0 DeviceMinor:354 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:219 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:213 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm DeviceMajor:0 DeviceMinor:60 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm DeviceMajor:0 DeviceMinor:246 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:212 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj DeviceMajor:0 DeviceMinor:325 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:204 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 DeviceMajor:0 DeviceMinor:278 
Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl DeviceMajor:0 DeviceMinor:341 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:329 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm DeviceMajor:0 DeviceMinor:383 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-104 DeviceMajor:0 DeviceMinor:104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm DeviceMajor:0 DeviceMinor:290 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:366 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:284 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx DeviceMajor:0 
DeviceMinor:365 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:208 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-241 DeviceMajor:0 DeviceMinor:241 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:217 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:220 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-256 DeviceMajor:0 DeviceMinor:256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:205 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:214 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 
DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm DeviceMajor:0 DeviceMinor:372 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m DeviceMajor:0 DeviceMinor:378 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:210 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:216 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:369 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm DeviceMajor:0 DeviceMinor:221 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm DeviceMajor:0 DeviceMinor:52 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:202 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j DeviceMajor:0 DeviceMinor:321 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:200 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-239 DeviceMajor:0 DeviceMinor:239 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm DeviceMajor:0 DeviceMinor:343 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh DeviceMajor:0 DeviceMinor:218 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:227 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj DeviceMajor:0 DeviceMinor:330 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb DeviceMajor:0 DeviceMinor:331 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm DeviceMajor:0 DeviceMinor:332 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:215 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:209 
Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:211 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-254 DeviceMajor:0 DeviceMinor:254 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6 DeviceMajor:0 DeviceMinor:375 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs 
Inodes:6166277 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 DeviceMajor:0 DeviceMinor:373 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:5a:0b:7b:ac:d8:e6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 
Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:26:07.555230 master-0 kubenswrapper[4318]: I1203 14:26:07.555214 4318 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 14:26:07.555661 master-0 kubenswrapper[4318]: I1203 14:26:07.555403 4318 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:26:07.555824 master-0 kubenswrapper[4318]: I1203 14:26:07.555787 4318 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:26:07.556014 master-0 kubenswrapper[4318]: I1203 14:26:07.555959 4318 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:26:07.556204 master-0 kubenswrapper[4318]: I1203 14:26:07.555998 4318 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage
":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:26:07.556255 master-0 kubenswrapper[4318]: I1203 14:26:07.556222 4318 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:26:07.556255 master-0 kubenswrapper[4318]: I1203 14:26:07.556232 4318 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:26:07.556255 master-0 kubenswrapper[4318]: I1203 14:26:07.556241 4318 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:26:07.556435 master-0 kubenswrapper[4318]: I1203 14:26:07.556263 4318 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:26:07.556542 master-0 kubenswrapper[4318]: I1203 14:26:07.556528 4318 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:26:07.557360 master-0 kubenswrapper[4318]: I1203 14:26:07.557342 4318 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:26:07.557488 master-0 kubenswrapper[4318]: I1203 14:26:07.557423 4318 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:26:07.557488 master-0 kubenswrapper[4318]: I1203 14:26:07.557435 4318 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:26:07.557488 master-0 kubenswrapper[4318]: I1203 14:26:07.557454 4318 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:26:07.557488 master-0 kubenswrapper[4318]: I1203 14:26:07.557468 4318 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:26:07.557488 master-0 
kubenswrapper[4318]: I1203 14:26:07.557488 4318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:26:07.558627 master-0 kubenswrapper[4318]: I1203 14:26:07.558593 4318 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:26:07.558861 master-0 kubenswrapper[4318]: I1203 14:26:07.558836 4318 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 03 14:26:07.559194 master-0 kubenswrapper[4318]: I1203 14:26:07.559171 4318 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:26:07.559317 master-0 kubenswrapper[4318]: I1203 14:26:07.559293 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559321 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559332 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559341 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559350 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559359 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559367 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:26:07.559377 master-0 kubenswrapper[4318]: I1203 14:26:07.559376 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:26:07.559554 master-0 
kubenswrapper[4318]: I1203 14:26:07.559387 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:26:07.559554 master-0 kubenswrapper[4318]: I1203 14:26:07.559396 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:26:07.559554 master-0 kubenswrapper[4318]: I1203 14:26:07.559408 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:26:07.559554 master-0 kubenswrapper[4318]: I1203 14:26:07.559424 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 14:26:07.559554 master-0 kubenswrapper[4318]: I1203 14:26:07.559463 4318 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:26:07.559938 master-0 kubenswrapper[4318]: I1203 14:26:07.559911 4318 server.go:1280] "Started kubelet" Dec 03 14:26:07.560321 master-0 kubenswrapper[4318]: I1203 14:26:07.560050 4318 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 14:26:07.561338 master-0 kubenswrapper[4318]: I1203 14:26:07.560968 4318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:26:07.561338 master-0 kubenswrapper[4318]: I1203 14:26:07.561131 4318 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:26:07.562322 master-0 kubenswrapper[4318]: I1203 14:26:07.561774 4318 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:26:07.562059 master-0 systemd[1]: Started Kubernetes Kubelet. 
Dec 03 14:26:07.565268 master-0 kubenswrapper[4318]: I1203 14:26:07.565137 4318 server.go:449] "Adding debug handlers to kubelet server" Dec 03 14:26:07.571544 master-0 kubenswrapper[4318]: I1203 14:26:07.571180 4318 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 14:26:07.571544 master-0 kubenswrapper[4318]: I1203 14:26:07.571238 4318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 14:26:07.571544 master-0 kubenswrapper[4318]: I1203 14:26:07.571412 4318 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 07:38:48.013203822 +0000 UTC Dec 03 14:26:07.571544 master-0 kubenswrapper[4318]: I1203 14:26:07.571450 4318 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h12m40.441756025s for next certificate rotation Dec 03 14:26:07.571887 master-0 kubenswrapper[4318]: I1203 14:26:07.571618 4318 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Dec 03 14:26:07.571887 master-0 kubenswrapper[4318]: E1203 14:26:07.571351 4318 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Dec 03 14:26:07.572113 master-0 kubenswrapper[4318]: I1203 14:26:07.572075 4318 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 14:26:07.572113 master-0 kubenswrapper[4318]: I1203 14:26:07.572101 4318 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 14:26:07.572225 master-0 kubenswrapper[4318]: I1203 14:26:07.572168 4318 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.572676 master-0 kubenswrapper[4318]: I1203 14:26:07.572636 4318 factory.go:55] Registering systemd factory Dec 03 14:26:07.572676 master-0 kubenswrapper[4318]: I1203 14:26:07.572658 4318 factory.go:221] Registration of the systemd container factory successfully Dec 03 
14:26:07.572872 master-0 kubenswrapper[4318]: I1203 14:26:07.572847 4318 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.573310 master-0 kubenswrapper[4318]: I1203 14:26:07.573277 4318 factory.go:153] Registering CRI-O factory Dec 03 14:26:07.573310 master-0 kubenswrapper[4318]: I1203 14:26:07.573299 4318 factory.go:221] Registration of the crio container factory successfully Dec 03 14:26:07.573420 master-0 kubenswrapper[4318]: E1203 14:26:07.573325 4318 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Dec 03 14:26:07.573420 master-0 kubenswrapper[4318]: I1203 14:26:07.573382 4318 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 14:26:07.573420 master-0 kubenswrapper[4318]: I1203 14:26:07.573410 4318 factory.go:103] Registering Raw factory Dec 03 14:26:07.573550 master-0 kubenswrapper[4318]: I1203 14:26:07.573436 4318 manager.go:1196] Started watching for new ooms in manager Dec 03 14:26:07.574664 master-0 kubenswrapper[4318]: I1203 14:26:07.573915 4318 manager.go:319] Starting recovery of all containers Dec 03 14:26:07.585883 master-0 systemd[1]: Stopping Kubernetes Kubelet... Dec 03 14:26:07.598828 master-0 systemd[1]: kubelet.service: Deactivated successfully. Dec 03 14:26:07.599070 master-0 systemd[1]: Stopped Kubernetes Kubelet. Dec 03 14:26:07.619945 master-0 systemd[1]: Starting Kubernetes Kubelet... Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 03 14:26:07.706389 master-0 kubenswrapper[4409]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 03 14:26:07.707679 master-0 kubenswrapper[4409]: I1203 14:26:07.706469 4409 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710505 4409 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710525 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710530 4409 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710534 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710539 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710543 4409 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710547 4409 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710551 4409 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710555 4409 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710560 4409 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710564 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:26:07.710544 master-0 kubenswrapper[4409]: W1203 14:26:07.710569 4409 feature_gate.go:330] 
unrecognized feature gate: DNSNameResolver Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710573 4409 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710578 4409 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710582 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710586 4409 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710590 4409 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710593 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710597 4409 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710601 4409 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710604 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710609 4409 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710647 4409 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710653 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710658 4409 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710663 4409 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710667 4409 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710671 4409 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710676 4409 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710679 4409 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:26:07.711232 master-0 kubenswrapper[4409]: W1203 14:26:07.710705 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710709 4409 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710713 4409 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710717 4409 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710721 4409 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710724 4409 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 
14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710728 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710732 4409 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710735 4409 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710739 4409 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710743 4409 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710748 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710752 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710756 4409 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710760 4409 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710767 4409 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710772 4409 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710776 4409 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710779 4409 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 
14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710783 4409 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:26:07.711957 master-0 kubenswrapper[4409]: W1203 14:26:07.710787 4409 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710790 4409 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710794 4409 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710797 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710800 4409 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710804 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710807 4409 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710811 4409 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710814 4409 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710817 4409 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710821 4409 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710824 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710830 4409 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710834 4409 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710838 4409 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710841 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710845 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710848 4409 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710852 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710855 4409 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:26:07.713413 master-0 kubenswrapper[4409]: W1203 14:26:07.710858 4409 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: W1203 14:26:07.710862 4409 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.710962 4409 flags.go:64] FLAG: --address="0.0.0.0" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.710974 4409 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.710984 4409 flags.go:64] FLAG: --anonymous-auth="true" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.710990 4409 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 03 14:26:07.715858 master-0 
kubenswrapper[4409]: I1203 14:26:07.710999 4409 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711025 4409 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711044 4409 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711052 4409 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711057 4409 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711063 4409 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711070 4409 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711076 4409 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711081 4409 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711086 4409 flags.go:64] FLAG: --cgroup-root="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711091 4409 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711096 4409 flags.go:64] FLAG: --client-ca-file="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711101 4409 flags.go:64] FLAG: --cloud-config="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711105 4409 flags.go:64] FLAG: --cloud-provider="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711109 4409 flags.go:64] FLAG: --cluster-dns="[]" Dec 03 14:26:07.715858 master-0 
kubenswrapper[4409]: I1203 14:26:07.711116 4409 flags.go:64] FLAG: --cluster-domain="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711121 4409 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711127 4409 flags.go:64] FLAG: --config-dir="" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711131 4409 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 03 14:26:07.715858 master-0 kubenswrapper[4409]: I1203 14:26:07.711136 4409 flags.go:64] FLAG: --container-log-max-files="5" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711146 4409 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711150 4409 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711154 4409 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711159 4409 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711164 4409 flags.go:64] FLAG: --contention-profiling="false" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711168 4409 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711172 4409 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711177 4409 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711181 4409 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711187 4409 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 03 14:26:07.716833 master-0 
kubenswrapper[4409]: I1203 14:26:07.711192 4409 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711196 4409 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711205 4409 flags.go:64] FLAG: --enable-load-reader="false" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711209 4409 flags.go:64] FLAG: --enable-server="true" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711214 4409 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711220 4409 flags.go:64] FLAG: --event-burst="100" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711224 4409 flags.go:64] FLAG: --event-qps="50" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711228 4409 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711233 4409 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711238 4409 flags.go:64] FLAG: --eviction-hard="" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711244 4409 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711248 4409 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711253 4409 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711258 4409 flags.go:64] FLAG: --eviction-soft="" Dec 03 14:26:07.716833 master-0 kubenswrapper[4409]: I1203 14:26:07.711262 4409 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711266 4409 
flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711271 4409 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711276 4409 flags.go:64] FLAG: --experimental-mounter-path="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711280 4409 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711285 4409 flags.go:64] FLAG: --fail-swap-on="true" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711289 4409 flags.go:64] FLAG: --feature-gates="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711294 4409 flags.go:64] FLAG: --file-check-frequency="20s" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711299 4409 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711304 4409 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711308 4409 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711313 4409 flags.go:64] FLAG: --healthz-port="10248" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711317 4409 flags.go:64] FLAG: --help="false" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711321 4409 flags.go:64] FLAG: --hostname-override="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711325 4409 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711330 4409 flags.go:64] FLAG: --http-check-frequency="20s" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711334 4409 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 03 
14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711339 4409 flags.go:64] FLAG: --image-credential-provider-config="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711343 4409 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711348 4409 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711355 4409 flags.go:64] FLAG: --image-service-endpoint="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711359 4409 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711363 4409 flags.go:64] FLAG: --kube-api-burst="100" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711368 4409 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711372 4409 flags.go:64] FLAG: --kube-api-qps="50" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711376 4409 flags.go:64] FLAG: --kube-reserved="" Dec 03 14:26:07.718225 master-0 kubenswrapper[4409]: I1203 14:26:07.711381 4409 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711385 4409 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711389 4409 flags.go:64] FLAG: --kubelet-cgroups="" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711393 4409 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711397 4409 flags.go:64] FLAG: --lock-file="" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711401 4409 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 
14:26:07.711405 4409 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711410 4409 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711416 4409 flags.go:64] FLAG: --log-json-split-stream="false" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711421 4409 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711425 4409 flags.go:64] FLAG: --log-text-split-stream="false" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711429 4409 flags.go:64] FLAG: --logging-format="text" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711433 4409 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711438 4409 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711443 4409 flags.go:64] FLAG: --manifest-url="" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711447 4409 flags.go:64] FLAG: --manifest-url-header="" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711452 4409 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711457 4409 flags.go:64] FLAG: --max-open-files="1000000" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711462 4409 flags.go:64] FLAG: --max-pods="110" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711467 4409 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711471 4409 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711475 4409 flags.go:64] FLAG: 
--memory-manager-policy="None" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711479 4409 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711484 4409 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 03 14:26:07.720410 master-0 kubenswrapper[4409]: I1203 14:26:07.711488 4409 flags.go:64] FLAG: --node-ip="192.168.32.10" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711509 4409 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711521 4409 flags.go:64] FLAG: --node-status-max-images="50" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711526 4409 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711530 4409 flags.go:64] FLAG: --oom-score-adj="-999" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711534 4409 flags.go:64] FLAG: --pod-cidr="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711539 4409 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fff930cf757e23d388d86d05942b76e44d3bda5e387b299c239e4d12545d26dd" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711546 4409 flags.go:64] FLAG: --pod-manifest-path="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711551 4409 flags.go:64] FLAG: --pod-max-pids="-1" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711555 4409 flags.go:64] FLAG: --pods-per-core="0" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711559 4409 flags.go:64] FLAG: --port="10250" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711564 4409 flags.go:64] FLAG: 
--protect-kernel-defaults="false" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711568 4409 flags.go:64] FLAG: --provider-id="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711572 4409 flags.go:64] FLAG: --qos-reserved="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711577 4409 flags.go:64] FLAG: --read-only-port="10255" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711581 4409 flags.go:64] FLAG: --register-node="true" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711586 4409 flags.go:64] FLAG: --register-schedulable="true" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711590 4409 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711597 4409 flags.go:64] FLAG: --registry-burst="10" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711601 4409 flags.go:64] FLAG: --registry-qps="5" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711606 4409 flags.go:64] FLAG: --reserved-cpus="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711610 4409 flags.go:64] FLAG: --reserved-memory="" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711616 4409 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711621 4409 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 03 14:26:07.721648 master-0 kubenswrapper[4409]: I1203 14:26:07.711625 4409 flags.go:64] FLAG: --rotate-certificates="false" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711629 4409 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711633 4409 flags.go:64] FLAG: --runonce="false" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 
14:26:07.711638 4409 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711643 4409 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711647 4409 flags.go:64] FLAG: --seccomp-default="false" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711651 4409 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711656 4409 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711661 4409 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711665 4409 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711670 4409 flags.go:64] FLAG: --storage-driver-password="root" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711677 4409 flags.go:64] FLAG: --storage-driver-secure="false" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711682 4409 flags.go:64] FLAG: --storage-driver-table="stats" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711686 4409 flags.go:64] FLAG: --storage-driver-user="root" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711690 4409 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711695 4409 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711699 4409 flags.go:64] FLAG: --system-cgroups="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711703 4409 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Dec 03 14:26:07.723056 master-0 
kubenswrapper[4409]: I1203 14:26:07.711711 4409 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711716 4409 flags.go:64] FLAG: --tls-cert-file="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711720 4409 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711725 4409 flags.go:64] FLAG: --tls-min-version="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711729 4409 flags.go:64] FLAG: --tls-private-key-file="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711734 4409 flags.go:64] FLAG: --topology-manager-policy="none" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711738 4409 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 03 14:26:07.723056 master-0 kubenswrapper[4409]: I1203 14:26:07.711742 4409 flags.go:64] FLAG: --topology-manager-scope="container" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: I1203 14:26:07.711746 4409 flags.go:64] FLAG: --v="2" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: I1203 14:26:07.711751 4409 flags.go:64] FLAG: --version="false" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: I1203 14:26:07.711758 4409 flags.go:64] FLAG: --vmodule="" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: I1203 14:26:07.711763 4409 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: I1203 14:26:07.711767 4409 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711865 4409 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711871 4409 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711875 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711880 4409 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711884 4409 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711888 4409 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711891 4409 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711895 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711899 4409 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711902 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711906 4409 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711910 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711915 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711919 4409 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: 
W1203 14:26:07.711923 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:26:07.724580 master-0 kubenswrapper[4409]: W1203 14:26:07.711927 4409 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711930 4409 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711935 4409 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711940 4409 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711944 4409 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711948 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711954 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711958 4409 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711963 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711967 4409 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711971 4409 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711975 4409 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:26:07.725762 master-0 
kubenswrapper[4409]: W1203 14:26:07.711979 4409 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711982 4409 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711986 4409 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711990 4409 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.711994 4409 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.712019 4409 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.712023 4409 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.712026 4409 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:26:07.725762 master-0 kubenswrapper[4409]: W1203 14:26:07.712030 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712035 4409 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712039 4409 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712043 4409 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712047 4409 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712051 4409 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712054 4409 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712057 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712061 4409 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712073 4409 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712076 4409 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712080 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712084 4409 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712087 4409 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712091 4409 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 
03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712094 4409 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712097 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712101 4409 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712106 4409 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 03 14:26:07.727550 master-0 kubenswrapper[4409]: W1203 14:26:07.712109 4409 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712114 4409 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712119 4409 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712123 4409 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712127 4409 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712130 4409 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712134 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712138 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712141 4409 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:26:07.728756 master-0 
kubenswrapper[4409]: W1203 14:26:07.712145 4409 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712148 4409 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712152 4409 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712155 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712159 4409 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712162 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712166 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712169 4409 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 03 14:26:07.728756 master-0 kubenswrapper[4409]: W1203 14:26:07.712180 4409 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: I1203 14:26:07.712196 4409 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 03 14:26:07.729707 master-0 
kubenswrapper[4409]: I1203 14:26:07.722147 4409 server.go:491] "Kubelet version" kubeletVersion="v1.31.13" Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: I1203 14:26:07.722194 4409 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722322 4409 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722335 4409 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722342 4409 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722350 4409 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722357 4409 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722365 4409 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722372 4409 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722380 4409 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722387 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722394 4409 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722401 4409 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722408 4409 feature_gate.go:330] 
unrecognized feature gate: GCPLabelsTags Dec 03 14:26:07.729707 master-0 kubenswrapper[4409]: W1203 14:26:07.722415 4409 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722421 4409 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722428 4409 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722435 4409 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722442 4409 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722449 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722455 4409 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722462 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722469 4409 feature_gate.go:330] unrecognized feature gate: Example Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722476 4409 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722485 4409 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722498 4409 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722505 4409 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722513 4409 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722520 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722527 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722534 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722541 4409 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722548 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 03 14:26:07.730527 master-0 kubenswrapper[4409]: W1203 14:26:07.722559 4409 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722568 4409 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722576 4409 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722587 4409 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722596 4409 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722605 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722653 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722663 4409 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722672 4409 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722679 4409 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722685 4409 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722692 4409 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722699 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722706 4409 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722713 4409 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722719 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722726 4409 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity 
Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722734 4409 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722741 4409 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722748 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.731794 master-0 kubenswrapper[4409]: W1203 14:26:07.722755 4409 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722762 4409 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722769 4409 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722776 4409 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722782 4409 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722789 4409 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722798 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722805 4409 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722812 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722819 4409 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722826 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722833 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722843 4409 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722850 4409 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722856 4409 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722863 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722870 4409 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722877 4409 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722884 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722894 4409 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.732892 master-0 kubenswrapper[4409]: W1203 14:26:07.722901 4409 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: I1203 14:26:07.722913 4409 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723151 4409 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723171 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723178 4409 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723186 4409 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723194 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723202 4409 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723209 4409 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723218 4409 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723224 4409 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723231 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723238 4409 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723245 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723256 4409 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 03 14:26:07.734838 master-0 kubenswrapper[4409]: W1203 14:26:07.723264 4409 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723273 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723281 4409 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723288 4409 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723295 4409 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723302 4409 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723309 4409 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723316 4409 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723323 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723330 4409 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723337 4409 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723344 4409 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723351 4409 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723358 4409 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723365 4409 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723372 4409 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723378 4409 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723385 4409 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723394 4409 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723400 4409 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723408 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 03 14:26:07.735467 master-0 kubenswrapper[4409]: W1203 14:26:07.723415 4409 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723422 4409 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723429 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723435 4409 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723442 4409 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723448 4409 feature_gate.go:330] unrecognized feature gate: Example
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723455 4409 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723461 4409 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723467 4409 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723474 4409 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723481 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723487 4409 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723494 4409 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723501 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723507 4409 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723514 4409 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723520 4409 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723527 4409 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723536 4409 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723545 4409 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 03 14:26:07.736342 master-0 kubenswrapper[4409]: W1203 14:26:07.723553 4409 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723562 4409 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723570 4409 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723578 4409 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723585 4409 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723593 4409 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723601 4409 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723608 4409 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723616 4409 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723623 4409 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723632 4409 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723640 4409 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723648 4409 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723656 4409 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723664 4409 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723672 4409 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723679 4409 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 03 14:26:07.737211 master-0 kubenswrapper[4409]: W1203 14:26:07.723685 4409 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.723698 4409 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:false StreamingCollectionEncodingToProtobuf:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.723970 4409 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.728082 4409 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.728201 4409 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.728467 4409 server.go:997] "Starting client certificate rotation"
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.728477 4409 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.728980 4409 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.729120 4409 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 11:06:01.597198656 +0000 UTC
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.729242 4409 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h39m53.867958794s for next certificate rotation
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.730255 4409 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 03 14:26:07.738051 master-0 kubenswrapper[4409]: I1203 14:26:07.735598 4409 log.go:25] "Validated CRI v1 runtime API"
Dec 03 14:26:07.741130 master-0 kubenswrapper[4409]: I1203 14:26:07.741087 4409 log.go:25] "Validated CRI v1 image API"
Dec 03 14:26:07.742218 master-0 kubenswrapper[4409]: I1203 14:26:07.742159 4409 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 03 14:26:07.746921 master-0 kubenswrapper[4409]: I1203 14:26:07.746852 4409 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 aa54a2f4-b5ca-4d31-8008-d919d7ce257a:/dev/vda3]
Dec 03 14:26:07.747293 master-0 kubenswrapper[4409]: I1203 14:26:07.746908 4409 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm major:0 minor:343 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm major:0 minor:354 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm major:0 minor:290 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm major:0 minor:383 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm major:0 minor:379 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm major:0 minor:342 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm major:0 minor:52 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm major:0 minor:372 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm major:0 minor:332 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm major:0 minor:246 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm major:0 minor:286 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm major:0 minor:221 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm major:0 minor:60 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j:{mountpoint:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j major:0 minor:284 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9:{mountpoint:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh:{mountpoint:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx:{mountpoint:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m:{mountpoint:/var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m major:0 minor:378 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl:{mountpoint:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl major:0 minor:341 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx:{mountpoint:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj:{mountpoint:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj major:0 minor:330 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j:{mountpoint:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j major:0 minor:321 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out major:0 minor:204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb:{mountpoint:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb major:0 minor:331 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp major:0 minor:285 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate major:0 minor:203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth major:0 minor:211 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6:{mountpoint:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6 major:0 minor:375 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:201 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9:{mountpoint:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2:{mountpoint:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 major:0 minor:334 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f major:0 minor:328 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 major:0 minor:280 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t:{mountpoint:/var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t major:0 minor:281 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4:{mountpoint:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg:{mountpoint:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg major:0 minor:266 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q:{mountpoint:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls major:0 minor:364 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:200 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn:{mountpoint:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn major:0 minor:324 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2:{mountpoint:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 major:0 minor:369 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5:{mountpoint:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 major:0 minor:322 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj:{mountpoint:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj major:0 minor:325 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 major:0 minor:329 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 major:0 minor:323 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8:{mountpoint:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 major:0 minor:277 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access major:0 minor:366 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert major:0 minor:210 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx major:0 minor:365 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:207 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/54a2ca96b854d7cfeeb408b44caf1bba58af8b38dc5b28680252d8983b5074d5/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-104:{mountpoint:/var/lib/containers/storage/overlay/7bc7920e5c5bed2b596a520147c4e3c84a11dfb35bc62abcd7cad2582980ae4d/merged major:0 minor:104 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/e8d5036712eaaf52e8b303ae3a0ad06094cdc4985b0b8fb09869b40a2d4b8e56/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/19c1abe7a997eb037d55d5145fe86648c2db90813114b12c9c7652c0f1d1f1e7/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/8bfb67c7576d13e90076192677aa30b86c5042190c5b6305f1789f69da101ecb/merged major:0 minor:140 fsType:overlay blockSize:0} 
overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/59903b5296fedf6c0f24ab6f5abe6f7d0e7b646a02e422c69a53c699d926fcae/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/4a8abba7f7a89d47f29b0fd494c499fad5635f4df1f05a39f6f0c54b9f98058f/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/0e6c123e49cbfc8ad2310c6aa2234cd86fe197e0979eeed90c9035eb51ef7a31/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/fff3da91b4014c713106ce07bf08a1c3459095fccf3f23fd44eb2219a0aa9423/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-180:{mountpoint:/var/lib/containers/storage/overlay/46179b738d40d92cbfc79d3828e38aadf355cbc146a4dd58186db60a45867e03/merged major:0 minor:180 fsType:overlay blockSize:0} overlay_0-185:{mountpoint:/var/lib/containers/storage/overlay/56bd511792ab0d8730fa7addae37d28e9ca7e134e915c0ab16fcf4f1693cbc53/merged major:0 minor:185 fsType:overlay blockSize:0} overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/8e3259ff45bb019242db4b7cce93dc1f95123597443c95e7df078cdb35b562e1/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/8a8544b208887711d779c3cfa299a2947a0ffd69603963cd25ad724cf8d9eeca/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/bdb7a11dff26b5d1d84e830fc9c06adbe09df14aecbec250545cf3f3e279a688/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-225:{mountpoint:/var/lib/containers/storage/overlay/7c8f1a3b6457dc2f18634fe466778d995b421ba3f8744b45230fa9f530686f16/merged major:0 minor:225 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/9186d32c1831fd31a97eaeef51bea195ad2d43197748ad4a41ed0f3d0a6ee9b0/merged major:0 minor:228 fsType:overlay blockSize:0} 
overlay_0-230:{mountpoint:/var/lib/containers/storage/overlay/6adc24b4424c655c64aac023a5b302caac5a8f11e615c9fbbaae99e20f710e17/merged major:0 minor:230 fsType:overlay blockSize:0} overlay_0-239:{mountpoint:/var/lib/containers/storage/overlay/d25383c868b18ce9e4c7f658d4c7a43e0defc6249bd51aa636cfb461d08af14f/merged major:0 minor:239 fsType:overlay blockSize:0} overlay_0-241:{mountpoint:/var/lib/containers/storage/overlay/6118f35558c7664fe3716f7a462fcfc15db7caa767ff8247d8b984f03970ea91/merged major:0 minor:241 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/a49a3e5914e13a132d03cad2d1862009a7dcdd29ffd9baf1aee3ce827dfdb033/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/3a617a255aeaaedbf90e1b05df4940ff7d4a7da6d70625dae81a8c944d7ec8ed/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-254:{mountpoint:/var/lib/containers/storage/overlay/c6f41e97e4cf8b0b54abdf6a183e2baa9bdf2aa9c8a2910d19bc0c05ccdfa615/merged major:0 minor:254 fsType:overlay blockSize:0} overlay_0-256:{mountpoint:/var/lib/containers/storage/overlay/fb6726fdfb38a0b5e3e7e775d27c586ca96e01fe6b7711ba33c2cd0565b1a736/merged major:0 minor:256 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/76b7976a0f6e3651e1ba646f6a5371dd22406a456acb02c5ec8b0625a841ccb9/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-275:{mountpoint:/var/lib/containers/storage/overlay/b45fc4c5fdf18ac4e7662cbb3c67b8de6acb76eb91a9ee784ad2df94e3a7e0b5/merged major:0 minor:275 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/36739f71a4fea6ea2c2709e345388c28447d9da6c43b115938c0871e0c96c7ce/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-288:{mountpoint:/var/lib/containers/storage/overlay/0c7448af4269f7e887ef7ee1cdbbdbf5800491a751ac364aaa1152d5fb1a2522/merged major:0 minor:288 fsType:overlay blockSize:0} 
overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/c3a7482c4dffd1b208a101d86d94587d93f9ac4e298a30f4465f8430b08bf171/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/a5c613ebc746bcd0b0ec54e931b5fcdd48b4948f8959dd9fccf64862a514954e/merged major:0 minor:294 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/1a870f0091c53ce7e68100f825081dc06532a7acc04ab2476e1572799644ac15/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/858f5caa75e09a978426dc3662173626cc8f9d1dd9077d229a06249b7b2dd6a7/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/ebc3de299fa9e1ecb9d91fc239c2e2329f323489ae1fe0a7f0925f30b743c8b5/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/f840473344b63f1ec7fb75ec5759b754069756ac31f987393f5a9135084406c3/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-326:{mountpoint:/var/lib/containers/storage/overlay/3947d1d2207201c84046f22dbc82817ed65db35d7111517cbd0a986b06b28769/merged major:0 minor:326 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/90e25eaa2e6ed82a1c70dca354fa2f836e4515fd04760de1988b7de403165eb4/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/9faee4ccfe835e34371ff4f14f36dfc1bf10c24b0604397d7204cba02fb46b32/merged major:0 minor:339 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/01751de47ddf014ec871d5fb8a041fd4cae16069da06a92e0a79023108052984/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/76ed65b706b696363076e37e748c69776994125833e2f378300e5cebce80c0a5/merged major:0 minor:348 fsType:overlay blockSize:0} 
overlay_0-350:{mountpoint:/var/lib/containers/storage/overlay/9cc2f78d3f18f94969060aba283ecb87b8ed8559f05ddf17b1eb0048e8d84884/merged major:0 minor:350 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/2729f8b2e58137fcc6c6774811ee683758fd04bc0b2bfe6df070685fe3b70f59/merged major:0 minor:352 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/e9e01346c8a2d7241f49e4f4bd57e7b080c98e1a22de89ceed59f2602c2e07fd/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-358:{mountpoint:/var/lib/containers/storage/overlay/c9cd2993bf4a08eb6b65697ef842284a9292e73a56a8ad258f599bd71ea1ece7/merged major:0 minor:358 fsType:overlay blockSize:0} overlay_0-360:{mountpoint:/var/lib/containers/storage/overlay/c6bc034fda31cdd9a7701e912b1a0874cec01ca6b627c8a68c9d962b089f05ca/merged major:0 minor:360 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/f4ba6c4c0348a08b33f5544dfede0ee96ee2fcdb7ba6fdbded84f4804e9762d0/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/7e01f14a15a077455419152f299985b02d0c419279664230cfaa1aed75464d5f/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-370:{mountpoint:/var/lib/containers/storage/overlay/be4fbba248c74bf90d2d81b6f5403858f257dbd9a3fc20914f8d64fd13c1df4b/merged major:0 minor:370 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/eabb4d6eef50e22a4f75fb87f976b587c99935ba0aca8d1c496e6ec92bd4fe55/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-381:{mountpoint:/var/lib/containers/storage/overlay/500249d54611270d3d294a32f479ad3438fa208292aaa617bece31bf8e6b2abd/merged major:0 minor:381 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/c463fd1eaa631b4b96e68db0523fdf31d609f91a4e9140c374b8fdfb0a63f10c/merged major:0 minor:385 fsType:overlay blockSize:0} 
overlay_0-387:{mountpoint:/var/lib/containers/storage/overlay/54dec5ba4f99898136f31bb58b3153e7c4fba7d49c8bcaa492353ea5f8a35367/merged major:0 minor:387 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/5d9dc9c4a95a9f40862e419c14922df59496dd5c0062aabd17e4ad1375aa4dff/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/d8c87f1adedcbb1d63b45d2be0787b7a59ef5655db7bfd2cfd68c704413ce6f8/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/88e20c22083052c1ad725846418a2c9a921aaf6152b2580a07bea91e95c627c1/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/0e52c31c91a8d1ec1ca1c60896eb3aba79db61eb55ff5de9e1c06c0c62e42727/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/eb8931bd18b02778ad54a37066f2719d184c69759b32663df6abc759fa233fda/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/97e74ec6f67fbf677ffdf637c333af3856cb09f0cd53e8daa07e58662632ac0e/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/9f2afc1d15e1ba68d51fd13405955f4632590c1fb67008d69b95b439a57a0a2e/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/b3358ac5358f40f9c02ee05cee6c13403ff18226aba1a2a495c9696f44206cf1/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/80bf0d219bcbddf1ac7e9436e363fff144c45269170168d0c65f3d48e07da041/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/aed1aab4eb1eb899582dcc2612091c9ae1bbb4d177753cc631076c50bc5658f0/merged major:0 minor:66 fsType:overlay blockSize:0} 
overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/ab13992a32e65958e37796b32c2d69a96c03c0cc1c7b0e8729e407591e614586/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/ff1abe9cbc07c89f935f87d8363c9e72abfb9d8cf3350b708cc7faaffdfc3290/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/24c2646ef35171433357fd7c7c43629e43f1e1e7c662b89d1b5abde855a87d87/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/5a5356576965519a7d664f1f0f90f7c5ed07240c1c9f95020e9728f5b02f1adf/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/ecebccce8430047e5db0b7b50f8180e353879a387a43a506e6aa7bb0c8b3996f/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/ff7eebf5b23c55d79ccc0438fae54f44f0850a75a71d438f23d42d25985140b3/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/b09896e7dfa2a083ca8a17483668de03a9bf943e0ecf3b062c2b5b24643d47c1/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/354bdc37d023fac0660eb8c2be876ec1bcfdc92777778f0486e48ea3b6c1e6e3/merged major:0 minor:94 fsType:overlay blockSize:0}] Dec 03 14:26:07.770407 master-0 kubenswrapper[4409]: I1203 14:26:07.769724 4409 manager.go:217] Machine: {Timestamp:2025-12-03 14:26:07.768431983 +0000 UTC m=+0.095494509 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:125cf0c5ec044a7d965cb7c651a8c69c SystemUUID:125cf0c5-ec04-4a7d-965c-b7c651a8c69c 
BootID:5a54df78-64a7-4b65-a168-d6e871bf4ce7 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:204 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-256 DeviceMajor:0 DeviceMinor:256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:203 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-358 DeviceMajor:0 DeviceMinor:358 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-104 DeviceMajor:0 DeviceMinor:104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:212 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/6ef37bba-85d9-4303-80c0-aac3dc49d3d9/volumes/kubernetes.io~projected/kube-api-access-kcpv9 DeviceMajor:0 DeviceMinor:220 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4669137a-fbc4-41e1-8eeb-5f06b9da2641/volumes/kubernetes.io~projected/kube-api-access-7cvkj DeviceMajor:0 DeviceMinor:330 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:202 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-254 DeviceMajor:0 DeviceMinor:254 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e9a05d7e90961d3ec6cbb53a2f6778df05333d4e8cc9a5bd075681da79a0b02a/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes/kubernetes.io~projected/kube-api-access-fdh5m DeviceMajor:0 DeviceMinor:378 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:209 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1553e2ce0b8aa3779929d981198c5f8e351fb2223ae3b8db12f84bf0c538530/userdata/shm DeviceMajor:0 DeviceMinor:221 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-326 DeviceMajor:0 DeviceMinor:326 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e/volumes/kubernetes.io~projected/kube-api-access-mq4w9 DeviceMajor:0 DeviceMinor:373 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:217 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~projected/kube-api-access-cbch4 DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/39c9f0dfeed7d76d7f59b17491dbd28d580985c222f4ff23f224fd31af206304/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-350 DeviceMajor:0 DeviceMinor:350 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae87bf7cb8d43cc7af4db2746d00b55e741a737f2fb65f21d10e49335d115764/userdata/shm DeviceMajor:0 DeviceMinor:246 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ab57c9265951a18e809b6f066faf003d5286c2afed47c5f58a5c1c947b6a420c/userdata/shm DeviceMajor:0 DeviceMinor:332 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:200 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-239 DeviceMajor:0 DeviceMinor:239 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/19c2a40b-213c-42f1-9459-87c2e780a75f/volumes/kubernetes.io~projected/kube-api-access-mbdtx DeviceMajor:0 DeviceMinor:244 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d/volumes/kubernetes.io~projected/kube-api-access-xhhw8 DeviceMajor:0 DeviceMinor:277 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/4df2889c-99f7-402a-9d50-18ccf427179c/volumes/kubernetes.io~projected/kube-api-access-lpl5j DeviceMajor:0 DeviceMinor:321 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-180 DeviceMajor:0 DeviceMinor:180 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d7d6a05e-beee-40e9-b376-5c22e285b27a/volumes/kubernetes.io~projected/kube-api-access-l6zfj DeviceMajor:0 DeviceMinor:325 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~projected/kube-api-access-8wc6r DeviceMajor:0 DeviceMinor:227 
Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~projected/kube-api-access-tqqf2 DeviceMajor:0 DeviceMinor:245 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/aa169e84-880b-4e6d-aeee-7ebfa1f613d2/volumes/kubernetes.io~projected/kube-api-access-97xsn DeviceMajor:0 DeviceMinor:324 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e54b0fdb82f3508a1e2216d67eb4d6445779675c411d290c0897ebadc06cd75/userdata/shm DeviceMajor:0 DeviceMinor:343 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-370 DeviceMajor:0 DeviceMinor:370 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-387 DeviceMajor:0 DeviceMinor:387 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bcc78129-4a81-410e-9a42-b12043b5a75a/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:219 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95609e7405ecf2488eee091df35fdf39a681f30263d17ad35c7bd8f8103628b4/userdata/shm DeviceMajor:0 DeviceMinor:237 
Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/25ccaeca90add0c706d8f0829780af88415d508dddddfc88bac8dc752927d5ca/userdata/shm DeviceMajor:0 DeviceMinor:354 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~projected/kube-api-access-z96q6 DeviceMajor:0 DeviceMinor:375 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7676de971fc917a431fb45dcb1aa562dc1c01388c248219887d92ca4dbdcf286/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8c6fa89f-268c-477b-9f04-238d2305cc89/volumes/kubernetes.io~projected/kube-api-access-955zg DeviceMajor:0 DeviceMinor:266 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~projected/kube-api-access-wm96f DeviceMajor:0 DeviceMinor:328 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/74e39dce-29d5-4b2a-ab19-386b6cdae94d/volumes/kubernetes.io~projected/kube-api-access-w7lp2 DeviceMajor:0 DeviceMinor:334 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:215 Capacity:49335554048 Type:vfs 
Inodes:6166277 HasInodes:true} {Device:overlay_0-275 DeviceMajor:0 DeviceMinor:275 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-288 DeviceMajor:0 DeviceMinor:288 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6935a3f8-723e-46e6-8498-483f34bf0825/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:201 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:232 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~projected/kube-api-access-57rrp DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-225 DeviceMajor:0 DeviceMinor:225 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8d86a8a42eb4089dbbfc1b7a8e71e3ff69f98509b075ddb0a4b202d1a66b166a/userdata/shm DeviceMajor:0 DeviceMinor:342 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5d838c1a-22e2-4096-9739-7841ef7d06ba/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/799e819f-f4b2-4ac9-8fa4-7d4da7a79285/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:216 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~projected/kube-api-access-szdzx DeviceMajor:0 DeviceMinor:365 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/77430348-b53a-4898-8047-be8bb542a0a7/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:205 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/829d285f-d532-45e4-b1ec-54adbc21b9f9/volumes/kubernetes.io~projected/kube-api-access-wd79t DeviceMajor:0 DeviceMinor:281 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c/userdata/shm DeviceMajor:0 DeviceMinor:52 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~projected/kube-api-access-5jtgh DeviceMajor:0 DeviceMinor:218 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-241 DeviceMajor:0 DeviceMinor:241 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:211 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-360 DeviceMajor:0 DeviceMinor:360 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e97e1725-cb55-4ce3-952d-a4fd0731577d/volumes/kubernetes.io~projected/kube-api-access-9hpt5 DeviceMajor:0 DeviceMinor:323 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-381 DeviceMajor:0 DeviceMinor:381 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~projected/kube-api-access-hnrdd DeviceMajor:0 DeviceMinor:233 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04e9e2a5-cdc2-42af-ab2c-49525390be6d/volumes/kubernetes.io~projected/kube-api-access-2dv7j DeviceMajor:0 DeviceMinor:284 Capacity:49335554048 Type:vfs 
Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30c5c8231a2fdc6c1f1bdd2a7120fa3fda5992d6d6fbd55a2aa6bbfd4a61e976/userdata/shm DeviceMajor:0 DeviceMinor:290 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:366 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/15782f65-35d2-4e95-bf49-81541c683ffe/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:214 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/434fc477c5457e60087acc76813fd72cb27de054bff9c189548ffe99c435340c/userdata/shm DeviceMajor:0 DeviceMinor:383 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-230 DeviceMajor:0 DeviceMinor:230 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a9b62b2f-1e7a-4f1b-a988-4355d93dda46/volumes/kubernetes.io~projected/kube-api-access-gsjls DeviceMajor:0 DeviceMinor:364 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a0a1f9a9b7b1f0d057b8d078fb3aea2055d28e6a2f970bd4e3d5f6e6a6fd91d6/userdata/shm DeviceMajor:0 DeviceMinor:372 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ec89938d-35a5-46ba-8c63-12489db18cbd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:210 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b71ac8a5-987d-4eba-8bc0-a091f0a0de16/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:213 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/82a4b6d7b88855ff7bcea4e18ae25c43195e22314ee0986b90cd47c57540e2f4/userdata/shm DeviceMajor:0 DeviceMinor:379 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-185 DeviceMajor:0 DeviceMinor:185 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eecc43f5-708f-4395-98cc-696b243d6321/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8eee1d96-2f58-41a6-ae51-c158b29fc813/volumes/kubernetes.io~projected/kube-api-access-p667q DeviceMajor:0 DeviceMinor:243 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/44af6af5-cecb-4dc4-b793-e8e350f8a47d/volumes/kubernetes.io~projected/kube-api-access-kk4tx DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ccac17978b39132cce8fcff33ef9cceb6f892855db54a3158e01072c992a100f/userdata/shm DeviceMajor:0 DeviceMinor:286 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b681889-eb2c-41fb-a1dc-69b99227b45b/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c777c9de-1ace-46be-b5c2-c71d252f53f4/volumes/kubernetes.io~projected/kube-api-access-k5fn5 DeviceMajor:0 DeviceMinor:322 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~projected/kube-api-access-gqnb7 DeviceMajor:0 DeviceMinor:329 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/da583723-b3ad-4a6f-b586-09b739bd7f8c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:208 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/42c95e54-b4ba-4b19-a97c-abcec840ac5d/volumes/kubernetes.io~projected/kube-api-access-b6tjl DeviceMajor:0 DeviceMinor:341 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b3c1ebb9-f052-410b-a999-45e9b75b0e58/volumes/kubernetes.io~projected/kube-api-access-mvzf2 DeviceMajor:0 DeviceMinor:369 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516/userdata/shm DeviceMajor:0 DeviceMinor:60 
Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/886cf93b2c85e64717ec808b21d9c098b044ad85e5fdff64839ab20e39357751/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/8a12409a-0be3-4023-9df3-a0f091aac8dc/volumes/kubernetes.io~projected/kube-api-access-wddf4 DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/56649bd4-ac30-4a70-8024-772294fede88/volumes/kubernetes.io~projected/kube-api-access-cjpnb DeviceMajor:0 DeviceMinor:331 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9e:de:c8:42:31:30 Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:27:5c:3d Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:c5:a0:b6 Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:5a:0b:7b:ac:d8:e6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 
Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 03 14:26:07.770407 master-0 kubenswrapper[4409]: I1203 14:26:07.770390 4409 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 03 14:26:07.770823 master-0 kubenswrapper[4409]: I1203 14:26:07.770788 4409 manager.go:233] Version: {KernelVersion:5.14.0-427.97.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202511041748-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 03 14:26:07.771208 master-0 kubenswrapper[4409]: I1203 14:26:07.771180 4409 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 03 14:26:07.771428 master-0 kubenswrapper[4409]: I1203 14:26:07.771378 4409 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 03 14:26:07.771677 master-0 kubenswrapper[4409]: I1203 14:26:07.771423 4409 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percent
age":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 03 14:26:07.771721 master-0 kubenswrapper[4409]: I1203 14:26:07.771704 4409 topology_manager.go:138] "Creating topology manager with none policy" Dec 03 14:26:07.771721 master-0 kubenswrapper[4409]: I1203 14:26:07.771717 4409 container_manager_linux.go:303] "Creating device plugin manager" Dec 03 14:26:07.771778 master-0 kubenswrapper[4409]: I1203 14:26:07.771728 4409 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:26:07.771778 master-0 kubenswrapper[4409]: I1203 14:26:07.771755 4409 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 03 14:26:07.771829 master-0 kubenswrapper[4409]: I1203 14:26:07.771799 4409 state_mem.go:36] "Initialized new in-memory state store" Dec 03 14:26:07.771911 master-0 kubenswrapper[4409]: I1203 14:26:07.771895 4409 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 03 14:26:07.771982 master-0 kubenswrapper[4409]: I1203 14:26:07.771968 4409 kubelet.go:418] "Attempting to sync node with API server" Dec 03 14:26:07.772389 master-0 kubenswrapper[4409]: I1203 14:26:07.771990 4409 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 03 14:26:07.772435 master-0 kubenswrapper[4409]: I1203 14:26:07.772399 4409 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 03 14:26:07.772435 master-0 kubenswrapper[4409]: I1203 14:26:07.772419 4409 kubelet.go:324] "Adding apiserver pod source" Dec 03 14:26:07.772435 master-0 
kubenswrapper[4409]: I1203 14:26:07.772433 4409 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 03 14:26:07.773722 master-0 kubenswrapper[4409]: I1203 14:26:07.773682 4409 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-2.rhaos4.18.git15789b8.el9" apiVersion="v1" Dec 03 14:26:07.773945 master-0 kubenswrapper[4409]: I1203 14:26:07.773929 4409 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 03 14:26:07.776505 master-0 kubenswrapper[4409]: I1203 14:26:07.776447 4409 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 03 14:26:07.778144 master-0 kubenswrapper[4409]: I1203 14:26:07.778061 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 03 14:26:07.779344 master-0 kubenswrapper[4409]: I1203 14:26:07.779299 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 03 14:26:07.779344 master-0 kubenswrapper[4409]: I1203 14:26:07.779331 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 03 14:26:07.779344 master-0 kubenswrapper[4409]: I1203 14:26:07.779339 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 03 14:26:07.779344 master-0 kubenswrapper[4409]: I1203 14:26:07.779347 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779355 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779363 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779370 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 03 14:26:07.779474 master-0 
kubenswrapper[4409]: I1203 14:26:07.779379 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779387 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779397 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779412 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 03 14:26:07.779474 master-0 kubenswrapper[4409]: I1203 14:26:07.779459 4409 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 03 14:26:07.780062 master-0 kubenswrapper[4409]: I1203 14:26:07.780039 4409 server.go:1280] "Started kubelet" Dec 03 14:26:07.781763 master-0 kubenswrapper[4409]: I1203 14:26:07.780140 4409 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 03 14:26:07.781763 master-0 kubenswrapper[4409]: I1203 14:26:07.780311 4409 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 03 14:26:07.781763 master-0 kubenswrapper[4409]: I1203 14:26:07.780404 4409 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 03 14:26:07.781763 master-0 kubenswrapper[4409]: I1203 14:26:07.780951 4409 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 03 14:26:07.781763 master-0 kubenswrapper[4409]: I1203 14:26:07.781555 4409 server.go:449] "Adding debug handlers to kubelet server" Dec 03 14:26:07.784266 master-0 kubenswrapper[4409]: I1203 14:26:07.784057 4409 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 03 14:26:07.784266 master-0 kubenswrapper[4409]: I1203 14:26:07.784097 4409 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 03 14:26:07.785283 master-0 
kubenswrapper[4409]: I1203 14:26:07.784251 4409 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2025-12-04 13:37:22 +0000 UTC, rotation deadline is 2025-12-04 10:14:48.148505453 +0000 UTC Dec 03 14:26:07.785283 master-0 kubenswrapper[4409]: I1203 14:26:07.784289 4409 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h48m40.36421847s for next certificate rotation Dec 03 14:26:07.785283 master-0 kubenswrapper[4409]: I1203 14:26:07.784628 4409 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.785283 master-0 kubenswrapper[4409]: I1203 14:26:07.784998 4409 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 03 14:26:07.785283 master-0 kubenswrapper[4409]: I1203 14:26:07.785025 4409 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 03 14:26:07.785283 master-0 kubenswrapper[4409]: I1203 14:26:07.785069 4409 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Dec 03 14:26:07.784793 master-0 systemd[1]: Started Kubernetes Kubelet. 
Dec 03 14:26:07.792355 master-0 kubenswrapper[4409]: I1203 14:26:07.791477 4409 factory.go:55] Registering systemd factory Dec 03 14:26:07.792355 master-0 kubenswrapper[4409]: I1203 14:26:07.791549 4409 factory.go:221] Registration of the systemd container factory successfully Dec 03 14:26:07.792355 master-0 kubenswrapper[4409]: I1203 14:26:07.792082 4409 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.798222 master-0 kubenswrapper[4409]: I1203 14:26:07.797919 4409 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.798416 master-0 kubenswrapper[4409]: I1203 14:26:07.798287 4409 factory.go:153] Registering CRI-O factory Dec 03 14:26:07.798416 master-0 kubenswrapper[4409]: I1203 14:26:07.798313 4409 factory.go:221] Registration of the crio container factory successfully Dec 03 14:26:07.798416 master-0 kubenswrapper[4409]: I1203 14:26:07.798389 4409 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 03 14:26:07.798416 master-0 kubenswrapper[4409]: I1203 14:26:07.798411 4409 factory.go:103] Registering Raw factory Dec 03 14:26:07.798536 master-0 kubenswrapper[4409]: I1203 14:26:07.798428 4409 manager.go:1196] Started watching for new ooms in manager Dec 03 14:26:07.798887 master-0 kubenswrapper[4409]: I1203 14:26:07.798863 4409 manager.go:319] Starting recovery of all containers Dec 03 14:26:07.800524 master-0 kubenswrapper[4409]: E1203 14:26:07.800480 4409 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Dec 03 14:26:07.806940 master-0 kubenswrapper[4409]: I1203 14:26:07.806881 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.806940 master-0 kubenswrapper[4409]: I1203 14:26:07.806936 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.806947 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.806958 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.806979 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.806992 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807017 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807030 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807043 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807055 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807064 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807073 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807082 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz" seLinuxMountContext="" Dec 03 14:26:07.807095 master-0 kubenswrapper[4409]: I1203 14:26:07.807101 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807115 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807127 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807137 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807147 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="42c95e54-b4ba-4b19-a97c-abcec840ac5d" volumeName="kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807155 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807165 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807196 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807207 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807219 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807231 4409 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807243 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807255 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" volumeName="kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807278 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807292 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807313 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807326 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807338 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807351 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807364 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807384 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807413 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807426 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.807425 master-0 kubenswrapper[4409]: I1203 14:26:07.807440 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807453 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807466 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807477 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807494 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807507 4409 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807521 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807533 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807547 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807559 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807571 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807584 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807596 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807609 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807620 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e0a2889-39a5-471e-bd46-958e2f8eacaa" volumeName="kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807633 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807649 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807661 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="38888547-ed48-4f96-810d-bcd04e49bd6b" volumeName="kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807702 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807719 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807734 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807746 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807758 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807771 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807783 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807794 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807806 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807818 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807830 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807843 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807854 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807867 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807879 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807891 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807903 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807914 4409 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807926 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807938 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f723d97-5c65-4ae7-9085-26db8b4f2f52" volumeName="kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807949 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807961 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807971 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807983 4409 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.807994 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.808030 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.808044 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.808059 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert" seLinuxMountContext="" Dec 03 14:26:07.808035 master-0 kubenswrapper[4409]: I1203 14:26:07.808071 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808083 4409 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808095 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808107 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808118 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808129 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808140 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 
14:26:07.808154 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808190 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808199 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808208 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808218 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808226 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 
kubenswrapper[4409]: I1203 14:26:07.808235 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808244 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808437 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808447 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e97e1725-cb55-4ce3-952d-a4fd0731577d" volumeName="kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808458 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808465 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities" seLinuxMountContext="" Dec 03 
14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808474 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808482 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808490 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808505 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808515 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config" seLinuxMountContext="" Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808525 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca" seLinuxMountContext="" Dec 03 
14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808535 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808543 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808552 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ef37bba-85d9-4303-80c0-aac3dc49d3d9" volumeName="kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808562 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808571 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808581 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808591 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808599 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808607 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808617 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808652 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808661 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" volumeName="kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808670 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808678 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808686 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808693 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808701 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808708 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" volumeName="kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808717 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808724 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808733 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808743 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" volumeName="kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808753 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808764 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808775 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808786 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808797 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808807 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808817 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808825 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808832 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808840 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808848 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808858 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808868 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808879 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808890 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808901 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" volumeName="kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808913 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" volumeName="kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808924 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808934 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808945 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7d6a05e-beee-40e9-b376-5c22e285b27a" volumeName="kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808956 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808966 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert" seLinuxMountContext=""
Dec 03 14:26:07.809329 master-0 kubenswrapper[4409]: I1203 14:26:07.808977 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl" seLinuxMountContext=""
Dec 03 14:26:07.811636 master-0 kubenswrapper[4409]: I1203 14:26:07.810034 4409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 03 14:26:07.811765 master-0 kubenswrapper[4409]: I1203 14:26:07.811622 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client" seLinuxMountContext=""
Dec 03 14:26:07.811804 master-0 kubenswrapper[4409]: I1203 14:26:07.811788 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8" seLinuxMountContext=""
Dec 03 14:26:07.811922 master-0 kubenswrapper[4409]: I1203 14:26:07.811863 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:26:07.811922 master-0 kubenswrapper[4409]: I1203 14:26:07.811911 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs" seLinuxMountContext=""
Dec 03 14:26:07.811986 master-0 kubenswrapper[4409]: I1203 14:26:07.811927 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles" seLinuxMountContext=""
Dec 03 14:26:07.811986 master-0 kubenswrapper[4409]: I1203 14:26:07.811952 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j" seLinuxMountContext=""
Dec 03 14:26:07.811986 master-0 kubenswrapper[4409]: I1203 14:26:07.811968 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp" seLinuxMountContext=""
Dec 03 14:26:07.811986 master-0 kubenswrapper[4409]: I1203 14:26:07.811983 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4" seLinuxMountContext=""
Dec 03 14:26:07.812114 master-0 kubenswrapper[4409]: I1203 14:26:07.812024 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 03 14:26:07.812114 master-0 kubenswrapper[4409]: I1203 14:26:07.812041 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Dec 03 14:26:07.812173 master-0 kubenswrapper[4409]: I1203 14:26:07.812121 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r" seLinuxMountContext=""
Dec 03 14:26:07.812228 master-0 kubenswrapper[4409]: I1203 14:26:07.812200 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.812268 master-0 kubenswrapper[4409]: I1203 14:26:07.812242 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls" seLinuxMountContext=""
Dec 03 14:26:07.812302 master-0 kubenswrapper[4409]: I1203 14:26:07.812276 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config" seLinuxMountContext=""
Dec 03 14:26:07.812302 master-0 kubenswrapper[4409]: I1203 14:26:07.812296 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.812359 master-0 kubenswrapper[4409]: I1203 14:26:07.812313 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out" seLinuxMountContext=""
Dec 03 14:26:07.812466 master-0 kubenswrapper[4409]: I1203 14:26:07.812395 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config" seLinuxMountContext=""
Dec 03 14:26:07.812504 master-0 kubenswrapper[4409]: I1203 14:26:07.812487 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f" seLinuxMountContext=""
Dec 03 14:26:07.812533 master-0 kubenswrapper[4409]: I1203 14:26:07.812506 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.812533 master-0 kubenswrapper[4409]: I1203 14:26:07.812527 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls" seLinuxMountContext=""
Dec 03 14:26:07.812591 master-0 kubenswrapper[4409]: I1203 14:26:07.812539 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca" seLinuxMountContext=""
Dec 03 14:26:07.812591 master-0 kubenswrapper[4409]: I1203 14:26:07.812574 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3675c78-1902-4b92-8a93-cf2dc316f060" volumeName="kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert" seLinuxMountContext=""
Dec 03 14:26:07.812591 master-0 kubenswrapper[4409]: I1203 14:26:07.812586 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca" seLinuxMountContext=""
Dec 03 14:26:07.813865 master-0 kubenswrapper[4409]: I1203 14:26:07.813800 4409 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 03 14:26:07.813905 master-0 kubenswrapper[4409]: I1203 14:26:07.813880 4409 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 03 14:26:07.813933 master-0 kubenswrapper[4409]: I1203 14:26:07.813912 4409 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 03 14:26:07.813997 master-0 kubenswrapper[4409]: E1203 14:26:07.813961 4409 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 03 14:26:07.815885 master-0 kubenswrapper[4409]: I1203 14:26:07.815823 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815889 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815907 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815919 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="803897bb-580e-4f7a-9be2-583fc607d1f6" volumeName="kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815932 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ec89938d-35a5-46ba-8c63-12489db18cbd" volumeName="kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815944 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815958 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815970 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" volumeName="kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815985 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3eef3ef-f954-4e47-92b4-0155bc27332d" volumeName="kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2" seLinuxMountContext=""
Dec 03 14:26:07.816000 master-0 kubenswrapper[4409]: I1203 14:26:07.815998 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816034 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816049 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816069 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816082 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816095 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816133 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816151 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816166 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816180 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816196 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816211 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a192c38a-4bfa-40fe-9a2d-d48260cf6443" volumeName="kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816225 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3c1ebb9-f052-410b-a999-45e9b75b0e58" volumeName="kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816239 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816254 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816269 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="911f6333-cdb0-425c-b79b-f892444b7097" volumeName="kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816282 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816307 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816325 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="33a557d1-cdd9-47ff-afbd-a301e7f589a7" volumeName="kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816342 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816355 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816368 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="98392f8e-0285-4bc3-95a9-d29033639ca3" volumeName="kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816385 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816397 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816421 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816437 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816453 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816470 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816486 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eecc43f5-708f-4395-98cc-696b243d6321" volumeName="kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816501 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="36da3c2f-860c-4188-a7d7-5b615981a835" volumeName="kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816518 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816534 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets" seLinuxMountContext=""
Dec 03 14:26:07.816529 master-0 kubenswrapper[4409]: I1203 14:26:07.816549 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext=""
Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816577 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q" seLinuxMountContext=""
Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816594 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots" seLinuxMountContext=""
Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816612 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert" seLinuxMountContext=""
Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816627 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="da583723-b3ad-4a6f-b586-09b739bd7f8c" volumeName="kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7" seLinuxMountContext=""
Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816642 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816656 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55351b08-d46d-4327-aa5e-ae17fdffdfb5" volumeName="kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816669 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816682 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816696 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816710 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8dc6511-7339-4269-9d43-14ce53bb4e7f" volumeName="kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw" 
seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816723 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816736 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816748 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816763 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816776 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816789 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls" 
seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816801 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816814 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b681889-eb2c-41fb-a1dc-69b99227b45b" volumeName="kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816833 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816848 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04e9e2a5-cdc2-42af-ab2c-49525390be6d" volumeName="kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816861 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816875 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89bc996-818b-46b9-ad39-a12457acd4bb" volumeName="kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config" 
seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816889 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" volumeName="kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816907 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816915 4409 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.816920 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817041 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817119 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817137 4409 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="918ff36b-662f-46ae-b71a-301df7e67735" volumeName="kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817152 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06d774e5-314a-49df-bdca-8e780c9af25a" volumeName="kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817224 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" volumeName="kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817241 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817253 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b340553b-d483-4839-8328-518f27770832" volumeName="kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817267 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="38888547-ed48-4f96-810d-bcd04e49bd6b" volumeName="kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817280 4409 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" volumeName="kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817319 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817333 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817346 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15782f65-35d2-4e95-bf49-81541c683ffe" volumeName="kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817360 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="829d285f-d532-45e4-b1ec-54adbc21b9f9" volumeName="kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817373 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7663a25e-236d-4b1d-83ce-733ab146dee3" volumeName="kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817387 4409 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="82bd0ae5-b35d-47c8-b693-b27a9a56476d" volumeName="kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817402 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c6fa89f-268c-477b-9f04-238d2305cc89" volumeName="kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817419 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" volumeName="kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817435 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="faa79e15-1875-4865-b5e0-aecd4c447bad" volumeName="kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817447 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817458 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817474 4409 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs" seLinuxMountContext="" Dec 03 14:26:07.817437 master-0 kubenswrapper[4409]: I1203 14:26:07.817486 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69b752ed-691c-4574-a01e-428d4bf85b75" volumeName="kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817498 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817510 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" volumeName="kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817521 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817531 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: 
I1203 14:26:07.817542 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817552 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="52100521-67e9-40c9-887c-eda6560f06e0" volumeName="kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817564 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817574 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a969ddd4-e20d-4dd2-84f4-a140bac65df0" volumeName="kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817585 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817616 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" volumeName="kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 
14:26:07.817630 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" volumeName="kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817643 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bcc78129-4a81-410e-9a42-b12043b5a75a" volumeName="kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817654 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817665 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8eee1d96-2f58-41a6-ae51-c158b29fc813" volumeName="kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817699 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="24dfafc9-86a9-450e-ac62-a871138106c0" volumeName="kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817711 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77430348-b53a-4898-8047-be8bb542a0a7" volumeName="kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 
kubenswrapper[4409]: I1203 14:26:07.817723 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b02244d0-f4ef-4702-950d-9e3fb5ced128" volumeName="kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817735 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0535e784-8e28-4090-aa2e-df937910767c" volumeName="kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817746 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" volumeName="kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817777 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b95a5a6-db93-4a58-aaff-3619d130c8cb" volumeName="kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817789 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74e39dce-29d5-4b2a-ab19-386b6cdae94d" volumeName="kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817801 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b442f72-b113-4227-93b5-ea1ae90d5154" volumeName="kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 
kubenswrapper[4409]: I1203 14:26:07.817812 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b051ae27-7879-448d-b426-4dce76e29739" volumeName="kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817822 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" volumeName="kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817852 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817865 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6935a3f8-723e-46e6-8498-483f34bf0825" volumeName="kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817876 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="19c2a40b-213c-42f1-9459-87c2e780a75f" volumeName="kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817888 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44af6af5-cecb-4dc4-b793-e8e350f8a47d" volumeName="kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls" seLinuxMountContext="" Dec 03 
14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817899 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4669137a-fbc4-41e1-8eeb-5f06b9da2641" volumeName="kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817930 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817941 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1b3ab29-77cf-48ac-8881-846c46bb9048" volumeName="kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817953 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c180b512-bf0c-4ddc-a5cf-f04acc830a61" volumeName="kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817963 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" volumeName="kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817974 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" 
volumeName="kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.817985 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818157 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eefee934-ac6b-44e3-a6be-1ae62362ab4f" volumeName="kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818168 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818178 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="690d1f81-7b1f-4fd0-9b6e-154c9687c744" volumeName="kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818190 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818202 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c95705e3-17ef-40fe-89e8-22586a32621b" volumeName="kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818213 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56649bd4-ac30-4a70-8024-772294fede88" volumeName="kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818431 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d838c1a-22e2-4096-9739-7841ef7d06ba" volumeName="kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818448 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" volumeName="kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818465 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1b3ab29-77cf-48ac-8881-846c46bb9048" volumeName="kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818476 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c562495-1290-4792-b4b2-639faa594ae2" volumeName="kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818488 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8a12409a-0be3-4023-9df3-a0f091aac8dc" volumeName="kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818499 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bff18a80-0b0f-40ab-862e-e8b1ab32040a" volumeName="kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818510 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c777c9de-1ace-46be-b5c2-c71d252f53f4" volumeName="kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818522 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9f484c1-1564-49c7-a43d-bd8b971cea20" volumeName="kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818534 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4df2889c-99f7-402a-9d50-18ccf427179c" volumeName="kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818785 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="adbcce01-7282-4a75-843a-9623060346f0" volumeName="kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818799 4409 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" volumeName="kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4" seLinuxMountContext=""
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818810 4409 reconstruct.go:97] "Volume reconstruction finished"
Dec 03 14:26:07.818974 master-0 kubenswrapper[4409]: I1203 14:26:07.818817 4409 reconciler.go:26] "Reconciler: start to sync state"
Dec 03 14:26:07.823234 master-0 kubenswrapper[4409]: I1203 14:26:07.823179 4409 generic.go:334] "Generic (PLEG): container finished" podID="b495b0c38f2c54e7cc46282c5f92aab5" containerID="a80929b981b600c3956c93ee3dcc81c8d5ba604e4e6c208f8e3ae77bdf73736d" exitCode=0
Dec 03 14:26:07.827407 master-0 kubenswrapper[4409]: I1203 14:26:07.827362 4409 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="6eae176e0ae8fefc0bdfeffe3c926861eedc7d77177bb0a1c542bb03d7b718af" exitCode=0
Dec 03 14:26:07.832286 master-0 kubenswrapper[4409]: I1203 14:26:07.832230 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="e996313978fa83ba3f4b88159399d4394708e2235d676a3ec4d90f36c6ebdd4f" exitCode=0
Dec 03 14:26:07.846247 master-0 kubenswrapper[4409]: I1203 14:26:07.846195 4409 generic.go:334] "Generic (PLEG): container finished" podID="b71ac8a5-987d-4eba-8bc0-a091f0a0de16" containerID="bd245e8e5a862c9ab237c131c5a86ef8f53726005c94dae01555c97627b35f8a" exitCode=0
Dec 03 14:26:07.850655 master-0 kubenswrapper[4409]: I1203 14:26:07.850611 4409 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb" exitCode=0
Dec 03 14:26:07.850783 master-0 kubenswrapper[4409]: I1203 14:26:07.850686 4409 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7" exitCode=0
Dec 03 14:26:07.850783 master-0 kubenswrapper[4409]: I1203 14:26:07.850706 4409 generic.go:334] "Generic (PLEG): container finished" podID="4dd8b778e190b1975a0a8fad534da6dd" containerID="dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0" exitCode=0
Dec 03 14:26:07.853472 master-0 kubenswrapper[4409]: I1203 14:26:07.853429 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d" exitCode=0
Dec 03 14:26:07.914251 master-0 kubenswrapper[4409]: E1203 14:26:07.914209 4409 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 03 14:26:07.944945 master-0 kubenswrapper[4409]: I1203 14:26:07.942303 4409 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 03 14:26:07.967104 master-0 kubenswrapper[4409]: I1203 14:26:07.966935 4409 manager.go:324] Recovery completed
Dec 03 14:26:08.002171 master-0 kubenswrapper[4409]: I1203 14:26:08.002138 4409 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 03 14:26:08.002171 master-0 kubenswrapper[4409]: I1203 14:26:08.002159 4409 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 03 14:26:08.002171 master-0 kubenswrapper[4409]: I1203 14:26:08.002179 4409 state_mem.go:36] "Initialized new in-memory state store"
Dec 03 14:26:08.002450 master-0 kubenswrapper[4409]: I1203 14:26:08.002342 4409 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 03 14:26:08.002450 master-0 kubenswrapper[4409]: I1203 14:26:08.002353 4409 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 03 14:26:08.002450 master-0 kubenswrapper[4409]: I1203 14:26:08.002402 4409 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Dec 03 14:26:08.002450 master-0 kubenswrapper[4409]: I1203 14:26:08.002408 4409 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Dec 03 14:26:08.002450 master-0 kubenswrapper[4409]: I1203 14:26:08.002414 4409 policy_none.go:49] "None policy: Start"
Dec 03 14:26:08.006907 master-0 kubenswrapper[4409]: I1203 14:26:08.004196 4409 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 03 14:26:08.006907 master-0 kubenswrapper[4409]: I1203 14:26:08.004221 4409 state_mem.go:35] "Initializing new in-memory state store"
Dec 03 14:26:08.006907 master-0 kubenswrapper[4409]: I1203 14:26:08.004381 4409 state_mem.go:75] "Updated machine memory state"
Dec 03 14:26:08.006907 master-0 kubenswrapper[4409]: I1203 14:26:08.004389 4409 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Dec 03 14:26:08.017801 master-0 kubenswrapper[4409]: I1203 14:26:08.017759 4409 manager.go:334] "Starting Device Plugin manager"
Dec 03 14:26:08.018021 master-0 kubenswrapper[4409]: I1203 14:26:08.017978 4409 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 03 14:26:08.018077 master-0 kubenswrapper[4409]: I1203 14:26:08.018022 4409 server.go:79] "Starting device plugin registration server"
Dec 03 14:26:08.018532 master-0 kubenswrapper[4409]: I1203 14:26:08.018502 4409 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 03 14:26:08.018584 master-0 kubenswrapper[4409]: I1203 14:26:08.018530 4409 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 03 14:26:08.018827 master-0 kubenswrapper[4409]: I1203 14:26:08.018782 4409 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 03 14:26:08.018902 master-0 kubenswrapper[4409]: I1203 14:26:08.018881 4409 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 03 14:26:08.018902 master-0 kubenswrapper[4409]: I1203 14:26:08.018896 4409 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 03 14:26:08.019690 master-0 kubenswrapper[4409]: E1203 14:26:08.019649 4409 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 14:26:08.115809 master-0 kubenswrapper[4409]: I1203 14:26:08.115349 4409 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"]
Dec 03 14:26:08.117087 master-0 kubenswrapper[4409]: I1203 14:26:08.116990 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerDied","Data":"a80929b981b600c3956c93ee3dcc81c8d5ba604e4e6c208f8e3ae77bdf73736d"}
Dec 03 14:26:08.117182 master-0 kubenswrapper[4409]: I1203 14:26:08.117090 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b495b0c38f2c54e7cc46282c5f92aab5","Type":"ContainerStarted","Data":"8fb8e7d592ee5f7b8ec5be92e046002cd51c8a87a167b750d4810047ffdc241c"}
Dec 03 14:26:08.117182 master-0 kubenswrapper[4409]: I1203 14:26:08.117110 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"05a2610f6bca4defc9b7ede8255a1c063ebe53f7d07ab7227fcf2edbc056b331"}
Dec 03 14:26:08.117182 master-0 kubenswrapper[4409]: I1203 14:26:08.117123 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"2d61d8802bbc570d04dd9977fb07dd6294b8212bfe0e7176af3f6ce67f85ee5a"}
Dec 03 14:26:08.117182 master-0 kubenswrapper[4409]: I1203 14:26:08.117136 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"d0a827a444c38d75c515a416cb1a917a642fb70a7523efb53087345e0bb3e2ee"}
Dec 03 14:26:08.117182 master-0 kubenswrapper[4409]: I1203 14:26:08.117183 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerDied","Data":"6eae176e0ae8fefc0bdfeffe3c926861eedc7d77177bb0a1c542bb03d7b718af"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117205 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"fd2fa610bb2a39c39fcdd00db03a511a","Type":"ContainerStarted","Data":"40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117339 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"273350e7b0aeceae0168f90588eb07e0ee52a413f6434e0abfb74158cc482c9d"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117359 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"e40eeccb22154afc36511e259a0bbd0340bbb8c152ccc392f07b9b63e9286432"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117370 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"be55425be92502579ba54e0a7029374fa5869946a681a8d47fee9f3e2abb52ad"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117381 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"d7424a3adff7dce95e229689db3a097554825a0a1b6fc1da3f511760d76ff1a4"}
Dec 03 14:26:08.117395 master-0 kubenswrapper[4409]: I1203 14:26:08.117392 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"2eb83f75a316413d7cd4039c1ecf1652c36407775bf11a763ce99c299576a480"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117406 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"2a7b80a876ff19badb393fe51e758bf7d41d437058e661f067ba45094dbb77bb"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117422 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"1c78909007996499471b7050ddc621df6e6e5371bac4e1a9e761d0aa25fda8a7"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117434 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerDied","Data":"dd09bbb6dabb6628edc9177b7dedd0208724a221e8229f867a98fb2ad0fb4bd0"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117445 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"4dd8b778e190b1975a0a8fad534da6dd","Type":"ContainerStarted","Data":"fe10c8571743ae8c18306344aa11beaf8c528d84ee560aab6bce934dc7552516"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117456 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117469 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117481 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117494 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117506 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117518 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerDied","Data":"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117532 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"8a00233b22d19df39b2e1c8ba133b3c2","Type":"ContainerStarted","Data":"b1685b8182bda49d4cb70217ebd8d9b38aed1b64a62ad1b32186f7a57cd3fcd1"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117545 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117558 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117572 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117587 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636"}
Dec 03 14:26:08.117641 master-0 kubenswrapper[4409]: I1203 14:26:08.117599 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095"}
Dec 03 14:26:08.118773 master-0 kubenswrapper[4409]: I1203 14:26:08.118709 4409 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 03 14:26:08.121980 master-0 kubenswrapper[4409]: I1203 14:26:08.121920 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Dec 03 14:26:08.121980 master-0 kubenswrapper[4409]: I1203 14:26:08.121962 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Dec 03 14:26:08.121980 master-0 kubenswrapper[4409]: I1203 14:26:08.121977 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Dec 03 14:26:08.122258 master-0 kubenswrapper[4409]: I1203 14:26:08.122142 4409 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Dec 03 14:26:08.239749 master-0 kubenswrapper[4409]: E1203 14:26:08.239402 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.243971 master-0 kubenswrapper[4409]: I1203 14:26:08.243940 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.244153 master-0 kubenswrapper[4409]: I1203 14:26:08.244125 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.244264 master-0 kubenswrapper[4409]: I1203 14:26:08.244250 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.244350 master-0 kubenswrapper[4409]: I1203 14:26:08.244337 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.244424 master-0 kubenswrapper[4409]: I1203 14:26:08.244413 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.244503 master-0 kubenswrapper[4409]: I1203 14:26:08.244490 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.244580 master-0 kubenswrapper[4409]: I1203 14:26:08.244568 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.244667 master-0 kubenswrapper[4409]: I1203 14:26:08.244654 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.244738 master-0 kubenswrapper[4409]: I1203 14:26:08.244727 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.244815 master-0 kubenswrapper[4409]: I1203 14:26:08.244804 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.244888 master-0 kubenswrapper[4409]: I1203 14:26:08.244877 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.244957 master-0 kubenswrapper[4409]: I1203 14:26:08.244946 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.245042 master-0 kubenswrapper[4409]: I1203 14:26:08.245031 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.245115 master-0 kubenswrapper[4409]: I1203 14:26:08.245104 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.245194 master-0 kubenswrapper[4409]: I1203 14:26:08.245182 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.346461 master-0 kubenswrapper[4409]: I1203 14:26:08.346423 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.346612 master-0 kubenswrapper[4409]: I1203 14:26:08.346592 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.346741 master-0 kubenswrapper[4409]: I1203 14:26:08.346723 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.346865 master-0 kubenswrapper[4409]: I1203 14:26:08.346847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.346973 master-0 kubenswrapper[4409]: I1203 14:26:08.346959 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.347112 master-0 kubenswrapper[4409]: I1203 14:26:08.347055 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.347187 master-0 kubenswrapper[4409]: I1203 14:26:08.346842 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.347187 master-0 kubenswrapper[4409]: I1203 14:26:08.346639 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-log-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347187 master-0 kubenswrapper[4409]: I1203 14:26:08.346591 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-resource-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347187 master-0 kubenswrapper[4409]: I1203 14:26:08.347070 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347227 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347256 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347279 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.346878 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347300 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347322 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.347344 master-0 kubenswrapper[4409]: I1203 14:26:08.347345 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347370 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347376 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-cert-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347379 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347425 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347395 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347377 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b495b0c38f2c54e7cc46282c5f92aab5-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b495b0c38f2c54e7cc46282c5f92aab5\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347418 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347420 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347436 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347486 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347344 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-data-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.347632 master-0 kubenswrapper[4409]: I1203 14:26:08.347579 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-static-pod-dir\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.348128 master-0 kubenswrapper[4409]: I1203 14:26:08.347101 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/4dd8b778e190b1975a0a8fad534da6dd-usr-local-bin\") pod \"etcd-master-0\" (UID: \"4dd8b778e190b1975a0a8fad534da6dd\") " pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.526175 master-0 kubenswrapper[4409]: I1203 14:26:08.526002 4409 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Dec 03 14:26:08.526461 master-0 kubenswrapper[4409]: I1203 14:26:08.526430 4409 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Dec 03 14:26:08.526881 master-0 kubenswrapper[4409]: E1203 14:26:08.526846 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Dec 03 14:26:08.527457 master-0 kubenswrapper[4409]: I1203 14:26:08.527437 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Dec 03 14:26:08.527530 master-0 kubenswrapper[4409]: I1203 14:26:08.527459 4409 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:08Z","lastTransitionTime":"2025-12-03T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 03 14:26:08.527735 master-0 kubenswrapper[4409]: E1203 14:26:08.527698 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:26:08.527735 master-0 kubenswrapper[4409]: E1203 14:26:08.527724 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Dec 03 14:26:08.528294 master-0 kubenswrapper[4409]: E1203 14:26:08.528226 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:08.615572 master-0 kubenswrapper[4409]: E1203 14:26:08.615510 4409 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:08.618913 master-0 kubenswrapper[4409]: I1203 14:26:08.618884 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:26:08.619056 master-0 kubenswrapper[4409]: I1203 14:26:08.618921 4409 
setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:08Z","lastTransitionTime":"2025-12-03T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 03 14:26:08.686488 master-0 kubenswrapper[4409]: E1203 14:26:08.686397 4409 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:08.690915 master-0 kubenswrapper[4409]: I1203 14:26:08.690866 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:26:08.691154 master-0 kubenswrapper[4409]: I1203 14:26:08.690896 4409 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:08Z","lastTransitionTime":"2025-12-03T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:26:08.716073 master-0 kubenswrapper[4409]: E1203 14:26:08.716001 4409 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:08.720039 master-0 kubenswrapper[4409]: I1203 14:26:08.719980 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:26:08.720191 master-0 kubenswrapper[4409]: I1203 14:26:08.720052 4409 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:08Z","lastTransitionTime":"2025-12-03T14:26:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:26:08.735409 master-0 kubenswrapper[4409]: E1203 14:26:08.735337 4409 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:08.773724 master-0 kubenswrapper[4409]: I1203 14:26:08.773620 4409 apiserver.go:52] "Watching apiserver" Dec 03 14:26:08.800794 master-0 kubenswrapper[4409]: I1203 14:26:08.800581 4409 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 03 14:26:08.803303 master-0 kubenswrapper[4409]: I1203 14:26:08.803217 4409 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm","openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw","openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg","openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz","openshift-kube-controller-manager/installer-3-master-0","openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml","openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p","openshift-ovn-kubernetes/ovnkube-node-txl6b","openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74","openshift-ingress/router-default-54f97f57-rr9px","openshift-machine-config-operator/machine-config-daemon-2ztl9","openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2","openshift-monitoring/prometheus-k8s-0","openshift-apiserver/apiserver-6985f84b49-v9vlg","openshift-kube-apiserver/installer-1-master-0","openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k","openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv","openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p","openshift-etcd/installer-1-master-0","openshift-etcd/installer-2-retry-1-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-machine-config-operator/machine-config-server-pvrfs","openshift-marketplace/certified-operators-t8rt7","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm","openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n","openshift-catalogd/catalogd-controller-manager-754cfd84-qf898","openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w","openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg","openshift-multus/multus-
kk4tm","openshift-kube-scheduler/installer-4-master-0","openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r","openshift-monitoring/alertmanager-main-0","openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8","openshift-kube-apiserver/kube-apiserver-master-0","openshift-network-diagnostics/network-check-target-pcchm","openshift-authentication/oauth-openshift-747bdb58b5-mn76f","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq","openshift-console-operator/console-operator-77df56447c-vsrxx","openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4","openshift-monitoring/prometheus-operator-565bdcb8-477pk","openshift-monitoring/telemeter-client-764cbf5554-kftwv","assisted-installer/assisted-installer-controller-stq5g","openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w","openshift-console/console-6c9c84854-xf7nv","openshift-ingress-canary/ingress-canary-vkpv4","openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg","openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7","openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm","openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29","openshift-dns/node-resolver-4xlhs","openshift-network-operator/network-operator-6cbf58c977-8lh6n","openshift-kube-scheduler/installer-6-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-marketplace/community-operators-7fwtv","openshift-marketplace/redhat-operators-6z4sc","openshift-etcd/etcd-master-0","openshift-kube-apiserver/installer-5-master-0","openshift-monitoring/node-exporter-b62gf","openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j","openshift-network-console/networking-console-plugin-7c696657b7-452tx","openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/than
os-querier-cc996c4bd-j4hzr","openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5","openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8","openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96","openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2","openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx","openshift-kube-apiserver/installer-4-master-0","openshift-multus/multus-admission-controller-84c998f64f-8stq7","openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl","openshift-dns/dns-default-5m4f8","openshift-network-operator/iptables-alerter-n24qb","openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6","openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr","openshift-service-ca/service-ca-6b8bb995f7-b68p8","openshift-cluster-node-tuning-operator/tuned-7zkbg","openshift-marketplace/redhat-marketplace-ddwmn","openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg","openshift-console/downloads-6f5db8559b-96ljh","openshift-insights/insights-operator-59d99f9b7b-74sss","openshift-multus/network-metrics-daemon-ch7xd","openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l","openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h","openshift-image-registry/node-ca-4p4zh","openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5","openshift-kube-apiserver/installer-2-master-0","openshift-network-node-identity/network-node-identity-c8csx","openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz","openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8","openshift-kube-controller-manager/installer-3-retry-1-master-0","openshift-monitoring/metrics-server-555496955b-vpcbs","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz","openshift-machine-api/machine-api-operator-7486ff55f-wcnxg","openshift-
multus/multus-additional-cni-plugins-42hmk","openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/installer-5-master-0","openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn","openshift-etcd-operator/etcd-operator-7978bf889c-n64v4","openshift-kube-apiserver/installer-6-master-0","openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb","openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"] Dec 03 14:26:08.803570 master-0 kubenswrapper[4409]: I1203 14:26:08.803532 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-stq5g" Dec 03 14:26:08.803666 master-0 kubenswrapper[4409]: I1203 14:26:08.803637 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:08.803745 master-0 kubenswrapper[4409]: E1203 14:26:08.803723 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:08.803812 master-0 kubenswrapper[4409]: I1203 14:26:08.803726 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:08.803812 master-0 kubenswrapper[4409]: E1203 14:26:08.803788 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:08.803939 master-0 kubenswrapper[4409]: I1203 14:26:08.803928 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:08.804040 master-0 kubenswrapper[4409]: E1203 14:26:08.803964 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:08.804144 master-0 kubenswrapper[4409]: I1203 14:26:08.804112 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:08.804226 master-0 kubenswrapper[4409]: E1203 14:26:08.804158 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:08.804226 master-0 kubenswrapper[4409]: I1203 14:26:08.804201 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.804377 master-0 kubenswrapper[4409]: I1203 14:26:08.804242 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:08.804377 master-0 kubenswrapper[4409]: E1203 14:26:08.804290 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:08.804377 master-0 kubenswrapper[4409]: E1203 14:26:08.804281 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:08.804658 master-0 kubenswrapper[4409]: I1203 14:26:08.804456 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:08.804658 master-0 kubenswrapper[4409]: E1203 14:26:08.804508 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:08.804658 master-0 kubenswrapper[4409]: I1203 14:26:08.804587 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:08.804658 master-0 kubenswrapper[4409]: E1203 14:26:08.804628 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:08.804658 master-0 kubenswrapper[4409]: I1203 14:26:08.804650 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: I1203 14:26:08.804661 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: I1203 14:26:08.804888 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: E1203 14:26:08.804887 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: I1203 14:26:08.804905 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: E1203 14:26:08.804964 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: E1203 14:26:08.805048 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:08.805119 master-0 kubenswrapper[4409]: E1203 14:26:08.805100 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: I1203 14:26:08.805365 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: E1203 14:26:08.805412 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: I1203 14:26:08.805449 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: I1203 14:26:08.805458 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: E1203 14:26:08.805506 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:26:08.805607 master-0 kubenswrapper[4409]: E1203 14:26:08.805547 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: I1203 14:26:08.805684 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: E1203 14:26:08.805723 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: I1203 14:26:08.805812 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: E1203 14:26:08.805856 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: I1203 14:26:08.805968 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:08.805994 master-0 kubenswrapper[4409]: E1203 14:26:08.806026 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:26:08.807121 master-0 kubenswrapper[4409]: I1203 14:26:08.807065 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:08.807121 master-0 kubenswrapper[4409]: I1203 14:26:08.807083 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:08.807391 master-0 kubenswrapper[4409]: I1203 14:26:08.807242 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:08.807391 master-0 kubenswrapper[4409]: E1203 14:26:08.807302 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a"
Dec 03 14:26:08.808289 master-0 kubenswrapper[4409]: E1203 14:26:08.807567 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:26:08.808289 master-0 kubenswrapper[4409]: E1203 14:26:08.807613 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:26:08.808289 master-0 kubenswrapper[4409]: I1203 14:26:08.808280 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:08.808781 master-0 kubenswrapper[4409]: E1203 14:26:08.808719 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:26:08.809425 master-0 kubenswrapper[4409]: I1203 14:26:08.809161 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:08.809425 master-0 kubenswrapper[4409]: I1203 14:26:08.809372 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:08.809579 master-0 kubenswrapper[4409]: E1203 14:26:08.809465 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:26:08.809647 master-0 kubenswrapper[4409]: I1203 14:26:08.809619 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Dec 03 14:26:08.810441 master-0 kubenswrapper[4409]: I1203 14:26:08.809993 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:08.810441 master-0 kubenswrapper[4409]: E1203 14:26:08.810119 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:26:08.810441 master-0 kubenswrapper[4409]: E1203 14:26:08.810313 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:26:08.810681 master-0 kubenswrapper[4409]: I1203 14:26:08.810551 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:08.810681 master-0 kubenswrapper[4409]: I1203 14:26:08.810638 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 14:26:08.810782 master-0 kubenswrapper[4409]: I1203 14:26:08.810682 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:08.810782 master-0 kubenswrapper[4409]: I1203 14:26:08.810679 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:08.810782 master-0 kubenswrapper[4409]: I1203 14:26:08.810742 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 14:26:08.810901 master-0 kubenswrapper[4409]: I1203 14:26:08.810809 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:26:08.810967 master-0 kubenswrapper[4409]: I1203 14:26:08.810911 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:26:08.810967 master-0 kubenswrapper[4409]: E1203 14:26:08.810949 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:26:08.811119 master-0 kubenswrapper[4409]: I1203 14:26:08.810965 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 03 14:26:08.811119 master-0 kubenswrapper[4409]: I1203 14:26:08.811068 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 14:26:08.811233 master-0 kubenswrapper[4409]: I1203 14:26:08.811130 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 14:26:08.811233 master-0 kubenswrapper[4409]: I1203 14:26:08.811148 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 03 14:26:08.811233 master-0 kubenswrapper[4409]: I1203 14:26:08.810971 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 03 14:26:08.811349 master-0 kubenswrapper[4409]: I1203 14:26:08.811230 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 14:26:08.811349 master-0 kubenswrapper[4409]: I1203 14:26:08.810928 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 03 14:26:08.811349 master-0 kubenswrapper[4409]: E1203 14:26:08.811327 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:26:08.811583 master-0 kubenswrapper[4409]: E1203 14:26:08.811136 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:26:08.811583 master-0 kubenswrapper[4409]: I1203 14:26:08.811407 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:26:08.811583 master-0 kubenswrapper[4409]: I1203 14:26:08.811415 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Dec 03 14:26:08.813797 master-0 kubenswrapper[4409]: I1203 14:26:08.813755 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.813919 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.813939 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.813949 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814156 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814201 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814267 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814491 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814547 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814590 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: E1203 14:26:08.814642 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0"
Dec 03 14:26:08.814975 master-0 kubenswrapper[4409]: I1203 14:26:08.814729 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 14:26:08.815479 master-0 kubenswrapper[4409]: I1203 14:26:08.815295 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 03 14:26:08.815834 master-0 kubenswrapper[4409]: I1203 14:26:08.815698 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Dec 03 14:26:08.815834 master-0 kubenswrapper[4409]: I1203 14:26:08.815724 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:08.816080 master-0 kubenswrapper[4409]: E1203 14:26:08.815836 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:08.816691 master-0 kubenswrapper[4409]: I1203 14:26:08.816311 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:08.816691 master-0 kubenswrapper[4409]: E1203 14:26:08.816371 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:26:08.816691 master-0 kubenswrapper[4409]: I1203 14:26:08.816480 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 14:26:08.816933 master-0 kubenswrapper[4409]: I1203 14:26:08.816881 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:08.816998 master-0 kubenswrapper[4409]: E1203 14:26:08.816968 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:26:08.817101 master-0 kubenswrapper[4409]: I1203 14:26:08.817063 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:08.817163 master-0 kubenswrapper[4409]: E1203 14:26:08.817105 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:26:08.819271 master-0 kubenswrapper[4409]: I1203 14:26:08.818822 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:08.819271 master-0 kubenswrapper[4409]: E1203 14:26:08.818917 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:26:08.819271 master-0 kubenswrapper[4409]: I1203 14:26:08.818974 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:08.819271 master-0 kubenswrapper[4409]: E1203 14:26:08.819057 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:26:08.820473 master-0 kubenswrapper[4409]: I1203 14:26:08.820312 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:08.820473 master-0 kubenswrapper[4409]: E1203 14:26:08.820384 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:08.822037 master-0 kubenswrapper[4409]: I1203 14:26:08.821627 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:08.822037 master-0 kubenswrapper[4409]: E1203 14:26:08.821910 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822125 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822274 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822336 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822347 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822349 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822350 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822428 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822449 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: E1203 14:26:08.822474 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822578 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822600 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 03 14:26:08.822895 master-0 kubenswrapper[4409]: I1203 14:26:08.822720 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:26:08.823243 master-0 kubenswrapper[4409]: I1203 14:26:08.822915 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:08.823392 master-0 kubenswrapper[4409]: I1203 14:26:08.823346 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:08.823497 master-0 kubenswrapper[4409]: E1203 14:26:08.823440 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:26:08.823497 master-0 kubenswrapper[4409]: I1203 14:26:08.823453 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:08.823562 master-0 kubenswrapper[4409]: E1203 14:26:08.823541 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:26:08.823600 master-0 kubenswrapper[4409]: I1203 14:26:08.823558 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Dec 03 14:26:08.823711 master-0 kubenswrapper[4409]: I1203 14:26:08.823662 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Dec 03 14:26:08.823745 master-0 kubenswrapper[4409]: I1203 14:26:08.823715 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 03 14:26:08.823886 master-0 kubenswrapper[4409]: I1203 14:26:08.823236 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: I1203 14:26:08.823880 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: I1203 14:26:08.823897 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: I1203 14:26:08.823967 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: E1203 14:26:08.823987 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: E1203 14:26:08.823448 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: E1203 14:26:08.824022 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:26:08.824087 master-0 kubenswrapper[4409]: E1203 14:26:08.824051 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:26:08.826640 master-0 kubenswrapper[4409]: I1203 14:26:08.826424 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: I1203 14:26:08.826684 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: I1203 14:26:08.826691 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: I1203 14:26:08.826722 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: E1203 14:26:08.826770 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: I1203 14:26:08.826783 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:08.826833 master-0 kubenswrapper[4409]: I1203 14:26:08.826801 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: E1203 14:26:08.826880 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: E1203 14:26:08.827046 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: I1203 14:26:08.827260 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: I1203 14:26:08.827265 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: I1203 14:26:08.827468 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: E1203 14:26:08.827507 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: E1203 14:26:08.827456 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: I1203 14:26:08.827606 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 14:26:08.827754 master-0 kubenswrapper[4409]: I1203 14:26:08.827723 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:08.828194 master-0 kubenswrapper[4409]: E1203 14:26:08.827782 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:26:08.828247 master-0 kubenswrapper[4409]: I1203 14:26:08.828219 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 14:26:08.828790 master-0 kubenswrapper[4409]: I1203 14:26:08.828451 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 03 14:26:08.828790 master-0 kubenswrapper[4409]: I1203 14:26:08.828522 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:08.828790 master-0 kubenswrapper[4409]: E1203 14:26:08.828584 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:26:08.828790 master-0 kubenswrapper[4409]: I1203 14:26:08.828460 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:08.828790 master-0 kubenswrapper[4409]: E1203 14:26:08.828681 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:26:08.829442 master-0 kubenswrapper[4409]: I1203 14:26:08.829413 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 14:26:08.829575 master-0 kubenswrapper[4409]: I1203 14:26:08.829533 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Dec 03 14:26:08.829575 master-0 kubenswrapper[4409]: I1203 14:26:08.829546 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Dec 03 14:26:08.829691 master-0 kubenswrapper[4409]: I1203 14:26:08.829538 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:08.829691 master-0 kubenswrapper[4409]: I1203 14:26:08.829658 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:26:08.829767 master-0 kubenswrapper[4409]: E1203 14:26:08.829708 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:08.829810 master-0 kubenswrapper[4409]: I1203 14:26:08.829768 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 14:26:08.829842 master-0 kubenswrapper[4409]: I1203 14:26:08.829808 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 14:26:08.829871 master-0 kubenswrapper[4409]: I1203 14:26:08.829861 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm" Dec 03 14:26:08.829904 master-0 kubenswrapper[4409]: I1203 14:26:08.829870 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:26:08.829934 master-0 kubenswrapper[4409]: I1203 14:26:08.829926 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Dec 03 14:26:08.830400 master-0 kubenswrapper[4409]: I1203 14:26:08.830177 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw" Dec 03 14:26:08.830400 master-0 kubenswrapper[4409]: I1203 14:26:08.830224 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.830400 master-0 kubenswrapper[4409]: E1203 14:26:08.830322 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:08.830569 master-0 kubenswrapper[4409]: I1203 14:26:08.830542 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 03 14:26:08.831499 master-0 kubenswrapper[4409]: I1203 14:26:08.830666 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 03 14:26:08.831499 master-0 kubenswrapper[4409]: I1203 14:26:08.831495 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: E1203 14:26:08.831578 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: I1203 14:26:08.831593 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: E1203 14:26:08.831682 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: I1203 14:26:08.831719 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: I1203 14:26:08.831725 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: I1203 14:26:08.831764 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: E1203 14:26:08.831768 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: I1203 14:26:08.831962 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.832035 master-0 kubenswrapper[4409]: E1203 14:26:08.831997 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:08.832417 master-0 kubenswrapper[4409]: I1203 14:26:08.832143 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 03 14:26:08.832417 master-0 kubenswrapper[4409]: I1203 14:26:08.832259 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:26:08.832417 master-0 kubenswrapper[4409]: I1203 14:26:08.832385 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 03 14:26:08.832592 master-0 kubenswrapper[4409]: I1203 14:26:08.832552 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 14:26:08.832764 master-0 kubenswrapper[4409]: I1203 14:26:08.832735 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.832818 master-0 kubenswrapper[4409]: E1203 14:26:08.832780 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:08.832989 master-0 kubenswrapper[4409]: I1203 14:26:08.832964 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:08.833073 master-0 kubenswrapper[4409]: E1203 14:26:08.833020 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:08.833245 master-0 kubenswrapper[4409]: I1203 14:26:08.833220 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:08.833299 master-0 kubenswrapper[4409]: E1203 14:26:08.833260 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:08.833638 master-0 kubenswrapper[4409]: I1203 14:26:08.833615 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh" Dec 03 14:26:08.833701 master-0 kubenswrapper[4409]: I1203 14:26:08.833651 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:08.833806 master-0 kubenswrapper[4409]: E1203 14:26:08.833757 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:08.834124 master-0 kubenswrapper[4409]: I1203 14:26:08.834098 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 03 14:26:08.834260 master-0 kubenswrapper[4409]: I1203 14:26:08.834247 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Dec 03 14:26:08.834385 master-0 kubenswrapper[4409]: I1203 14:26:08.834347 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:08.834606 master-0 kubenswrapper[4409]: I1203 14:26:08.834516 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Dec 03 14:26:08.834606 master-0 kubenswrapper[4409]: I1203 14:26:08.834573 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Dec 03 14:26:08.834606 master-0 kubenswrapper[4409]: I1203 14:26:08.834577 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Dec 03 14:26:08.834767 master-0 kubenswrapper[4409]: I1203 14:26:08.834611 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412855-jmbvv" Dec 03 14:26:08.834767 master-0 kubenswrapper[4409]: I1203 14:26:08.834621 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Dec 03 14:26:08.834767 master-0 kubenswrapper[4409]: I1203 14:26:08.834695 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 03 14:26:08.834950 master-0 kubenswrapper[4409]: I1203 14:26:08.834788 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 14:26:08.834950 master-0 kubenswrapper[4409]: E1203 14:26:08.834801 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:08.834950 master-0 kubenswrapper[4409]: I1203 14:26:08.834932 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.835100 master-0 kubenswrapper[4409]: E1203 14:26:08.835030 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:08.835796 master-0 kubenswrapper[4409]: I1203 14:26:08.835764 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.835849 master-0 kubenswrapper[4409]: E1203 14:26:08.835828 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:08.836272 master-0 kubenswrapper[4409]: I1203 14:26:08.836242 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.836333 master-0 kubenswrapper[4409]: E1203 14:26:08.836305 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:08.837061 master-0 kubenswrapper[4409]: I1203 14:26:08.837036 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:08.837139 master-0 kubenswrapper[4409]: E1203 14:26:08.837077 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:08.837258 master-0 kubenswrapper[4409]: I1203 14:26:08.837220 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:08.837345 master-0 kubenswrapper[4409]: E1203 14:26:08.837311 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:08.891039 master-0 kubenswrapper[4409]: I1203 14:26:08.890973 4409 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Dec 03 14:26:08.951105 master-0 kubenswrapper[4409]: I1203 14:26:08.951034 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.951105 master-0 kubenswrapper[4409]: I1203 14:26:08.951092 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.951105 master-0 kubenswrapper[4409]: I1203 14:26:08.951112 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951130 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951147 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951167 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951181 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951200 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951221 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.951401 master-0 kubenswrapper[4409]: I1203 14:26:08.951239 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:08.951592 master-0 kubenswrapper[4409]: I1203 14:26:08.951567 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.951626 master-0 kubenswrapper[4409]: I1203 14:26:08.951607 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: 
\"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.951656 master-0 kubenswrapper[4409]: I1203 14:26:08.951640 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.951686 master-0 kubenswrapper[4409]: I1203 14:26:08.951661 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:08.951721 master-0 kubenswrapper[4409]: I1203 14:26:08.951700 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.951759 master-0 kubenswrapper[4409]: I1203 14:26:08.951732 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:08.951759 master-0 kubenswrapper[4409]: I1203 14:26:08.951754 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:08.951814 master-0 kubenswrapper[4409]: I1203 14:26:08.951788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:08.951844 master-0 kubenswrapper[4409]: E1203 14:26:08.951825 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:08.951871 master-0 kubenswrapper[4409]: I1203 14:26:08.951849 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.951909 master-0 kubenswrapper[4409]: I1203 14:26:08.951873 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:08.951909 master-0 kubenswrapper[4409]: E1203 14:26:08.951896 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:08.951909 master-0 
kubenswrapper[4409]: I1203 14:26:08.951908 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:08.952086 master-0 kubenswrapper[4409]: I1203 14:26:08.951649 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-utilities\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:08.952086 master-0 kubenswrapper[4409]: I1203 14:26:08.951983 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:08.952222 master-0 kubenswrapper[4409]: E1203 14:26:08.952139 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:08.952222 master-0 kubenswrapper[4409]: E1203 14:26:08.952211 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.452138731 +0000 UTC m=+1.779201237 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:08.952296 master-0 kubenswrapper[4409]: E1203 14:26:08.952253 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.452231214 +0000 UTC m=+1.779293720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:08.952385 master-0 kubenswrapper[4409]: E1203 14:26:08.952327 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:08.952447 master-0 kubenswrapper[4409]: E1203 14:26:08.952431 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:08.952544 master-0 kubenswrapper[4409]: E1203 14:26:08.952495 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45245815 +0000 UTC m=+1.779520686 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:08.952544 master-0 kubenswrapper[4409]: E1203 14:26:08.952530 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:08.952544 master-0 kubenswrapper[4409]: E1203 14:26:08.952505 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:08.952661 master-0 kubenswrapper[4409]: E1203 14:26:08.952557 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.452535543 +0000 UTC m=+1.779598069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:08.952661 master-0 kubenswrapper[4409]: E1203 14:26:08.952566 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:08.952730 master-0 kubenswrapper[4409]: E1203 14:26:08.952695 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.452676947 +0000 UTC m=+1.779739453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:08.952766 master-0 kubenswrapper[4409]: E1203 14:26:08.952745 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.452731318 +0000 UTC m=+1.779793844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:08.952799 master-0 kubenswrapper[4409]: E1203 14:26:08.952775 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.452768359 +0000 UTC m=+1.779830865 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:08.952835 master-0 kubenswrapper[4409]: I1203 14:26:08.952790 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.952835 master-0 kubenswrapper[4409]: E1203 14:26:08.952813 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45279104 +0000 UTC m=+1.779853546 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:08.953210 master-0 kubenswrapper[4409]: I1203 14:26:08.953157 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.953297 master-0 kubenswrapper[4409]: I1203 14:26:08.953263 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:08.953381 master-0 kubenswrapper[4409]: I1203 14:26:08.953352 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.953438 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: 
\"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.953512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.953779 4409 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.953832 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.453820229 +0000 UTC m=+1.780882735 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.954382 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.954478 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.954630 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.954720 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.954818 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:08.956423 master-0 
kubenswrapper[4409]: E1203 14:26:08.954912 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955454 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955518 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955590 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 
14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955619 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955643 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955670 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955691 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955719 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955744 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955775 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955835 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955857 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.955882 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.955987 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45596687 +0000 UTC m=+1.783029386 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956077 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956093 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956107 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956109 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456078873 +0000 UTC m=+1.783141379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956158 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456137115 +0000 UTC m=+1.783199621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956165 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956200 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456169786 +0000 UTC m=+1.783232292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.956255 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956288 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956298 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956304 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956321 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456277279 +0000 UTC m=+1.783339825 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.956345 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/8eee1d96-2f58-41a6-ae51-c158b29fc813-volume-directive-shadow\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.956354 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-utilities\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: I1203 14:26:08.956373 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.956423 master-0 kubenswrapper[4409]: E1203 14:26:08.956509 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.456484804 +0000 UTC m=+1.783547320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956622 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456608588 +0000 UTC m=+1.783671094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956665 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956672 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.456661039 +0000 UTC m=+1.783723545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956733 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956738 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-script-lib\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956774 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956810 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956869 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956918 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956921 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456910297 +0000 UTC m=+1.783972803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.956971 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.956986 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.456968328 +0000 UTC m=+1.784030844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957027 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957048 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-config-out\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957063 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45704277 +0000 UTC m=+1.784105286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957090 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957210 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957251 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957284 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: 
\"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957323 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957353 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957383 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957413 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 
14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957438 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957469 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957485 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957502 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957525 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.457513454 +0000 UTC m=+1.784575960 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957547 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957554 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957581 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.457572535 +0000 UTC m=+1.784635041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957599 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957611 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957625 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957650 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 
14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957670 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957684 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957688 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.957709 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957712 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.957721 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.457711069 +0000 UTC m=+1.784773575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958490 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958514 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458501372 +0000 UTC m=+1.785564008 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958604 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958608 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-catalog-content\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958615 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958630 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458623895 +0000 UTC m=+1.785686401 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958632 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958665 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458651446 +0000 UTC m=+1.785714072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958620 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958690 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458678147 +0000 UTC m=+1.785740783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958714 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458701097 +0000 UTC m=+1.785763613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958533 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958764 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958802 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 
nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45879262 +0000 UTC m=+1.785855126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958764 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958822 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958835 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958852 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458843151 +0000 UTC m=+1.785905677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958874 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958904 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958933 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958937 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-service-ca-bundle\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958960 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958965 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.958986 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959030 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.458999556 +0000 UTC m=+1.786062172 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.958965 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959049 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959057 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959059 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959076 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459068728 +0000 UTC m=+1.786131234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959087 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959098 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459084138 +0000 UTC m=+1.786146754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959121 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459110789 +0000 UTC m=+1.786173415 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959128 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959144 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959165 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.45915455 +0000 UTC m=+1.786217146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959187 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959209 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459200052 +0000 UTC m=+1.786262718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959236 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959279 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959311 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459299814 +0000 UTC m=+1.786362440 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959330 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959365 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959439 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959479 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-env-overrides\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:08.963973 
master-0 kubenswrapper[4409]: I1203 14:26:08.959472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959536 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959603 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959633 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459623454 +0000 UTC m=+1.786685960 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959674 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959701 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959725 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959751 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959788 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.959827 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.459816779 +0000 UTC m=+1.786879285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.959970 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960038 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-proxy-tls\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960038 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960114 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-tmpfs\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960140 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960170 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960197 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960223 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960245 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960262 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960281 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960301 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960318 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960318 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960372 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960389 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960412 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960414 4409 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960413 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960428 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460421216 +0000 UTC m=+1.787483722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960463 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960473 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: E1203 14:26:08.960491 
4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460480298 +0000 UTC m=+1.787542804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:08.963973 master-0 kubenswrapper[4409]: I1203 14:26:08.960515 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960545 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960564 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.4605503 +0000 UTC m=+1.787612856 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960587 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460577701 +0000 UTC m=+1.787640327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960607 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460597681 +0000 UTC m=+1.787660377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960606 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960629 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960644 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460634642 +0000 UTC m=+1.787697268 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960616 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-images\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960669 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960684 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960685 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/77430348-b53a-4898-8047-be8bb542a0a7-ovn-node-metrics-cert\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960727 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960737 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460725515 +0000 UTC m=+1.787788111 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960767 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960771 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960808 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960835 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460826198 +0000 UTC m=+1.787888844 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960782 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960851 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.960871 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.460864359 +0000 UTC m=+1.787926995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960892 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960919 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960944 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960967 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod 
\"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.960990 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961028 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961055 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961076 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961077 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-auth-proxy-config\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961083 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-stats-auth\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961099 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961119 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461107846 +0000 UTC m=+1.788170352 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961143 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961158 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961206 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461186048 +0000 UTC m=+1.788248554 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961208 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961232 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961250 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961270 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961307 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461297411 +0000 UTC m=+1.788360047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961271 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961322 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-out\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961348 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961351 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961364 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c6fa89f-268c-477b-9f04-238d2305cc89-mcc-auth-proxy-config\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961395 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461386414 +0000 UTC m=+1.788449050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961404 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961391 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961454 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6935a3f8-723e-46e6-8498-483f34bf0825-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961508 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961518 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961531 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961546 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961551 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461537378 +0000 UTC m=+1.788599984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961585 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961605 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961619 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46160919 +0000 UTC m=+1.788671806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961582 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/69b752ed-691c-4574-a01e-428d4bf85b75-cache\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961640 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961667 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461654301 +0000 UTC m=+1.788716827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961684 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461676472 +0000 UTC m=+1.788738988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961695 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961704 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961733 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961759 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961792 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961819 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961843 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961868 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961870 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961879 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961896 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-mcd-auth-proxy-config\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961938 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.461926699 +0000 UTC m=+1.788989315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961947 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.961980 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46197015 +0000 UTC m=+1.789032666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.961895 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962069 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962095 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962117 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962135 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962141 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962166 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462157735 +0000 UTC m=+1.789220241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962184 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962206 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962212 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962225 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962239 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462230647 +0000 UTC m=+1.789293163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962241 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962258 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962280 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462269399 +0000 UTC m=+1.789332005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962303 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962312 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962335 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962338 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46233049 +0000 UTC m=+1.789393016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962373 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962400 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462392742 +0000 UTC m=+1.789455238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962371 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962430 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962452 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962464 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962474 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962493 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962501 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462491075 +0000 UTC m=+1.789553681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962514 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-available-featuregates\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962525 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962596 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962631 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962635 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-cni-binary-copy\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962636 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911f6333-cdb0-425c-b79b-f892444b7097-catalog-content\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962641 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462628279 +0000 UTC m=+1.789690875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962702 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-tmp\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962693 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962731 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962753 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462742602 +0000 UTC m=+1.789805218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962785 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962815 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: I1203 14:26:08.962841 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx"
Dec 03 14:26:08.967888 master-0 kubenswrapper[4409]: E1203 14:26:08.962865 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.962869 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-metrics-certs\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px"
Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.962902 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.462891476 +0000 UTC m=+1.789953992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.962865 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.962946 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") "
pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.962915 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.962973 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963039 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963043 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963061 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46304183 +0000 UTC m=+1.790104366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963061 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c95705e3-17ef-40fe-89e8-22586a32621b-snapshots\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.962956 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963089 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.463077642 +0000 UTC m=+1.790140148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963114 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963118 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.463109412 +0000 UTC m=+1.790171938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963152 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963163 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ec89938d-35a5-46ba-8c63-12489db18cbd-service-ca\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963178 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963202 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963207 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963239 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.463228726 +0000 UTC m=+1.790291322 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963263 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963293 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963319 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963343 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963368 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963395 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963419 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:08.971689 master-0 
kubenswrapper[4409]: I1203 14:26:08.963444 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963470 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963495 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963519 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963544 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: 
\"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963569 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963595 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963621 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963648 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963673 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963721 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963747 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963772 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963802 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963829 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963851 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963876 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963901 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963915 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963919 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963957 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.463946476 +0000 UTC m=+1.791009092 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963956 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-certs\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.963923 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.963960 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964150 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964178 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6935a3f8-723e-46e6-8498-483f34bf0825-ovnkube-config\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964194 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964232 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964255 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-ovnkube-config\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964038 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964294 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964052 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.463991807 +0000 UTC m=+1.791054433 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964316 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964326 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964354 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464342757 +0000 UTC m=+1.791405383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964379 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.464369138 +0000 UTC m=+1.791431784 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964385 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff18a80-0b0f-40ab-862e-e8b1ab32040a-utilities\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964393 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964442 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964455 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964457 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object 
"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964404 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964519 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-env-overrides\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964400 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464390779 +0000 UTC m=+1.791453385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964400 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964569 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.464557083 +0000 UTC m=+1.791619679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964589 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464580224 +0000 UTC m=+1.791642850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964461 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964609 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464601885 +0000 UTC m=+1.791664521 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964629 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464620445 +0000 UTC m=+1.791683071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964651 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464641666 +0000 UTC m=+1.791704292 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964672 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964701 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964727 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464710978 +0000 UTC m=+1.791773604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964746 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964751 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464741889 +0000 UTC m=+1.791804545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964771 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464764329 +0000 UTC m=+1.791826835 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964788 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46478046 +0000 UTC m=+1.791843106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964813 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46480524 +0000 UTC m=+1.791867876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.964830 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.464822291 +0000 UTC m=+1.791884937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964854 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964888 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964917 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964952 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964964 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-tls\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.964977 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965037 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965077 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:09.465064428 +0000 UTC m=+1.792126934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965082 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965119 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.465107679 +0000 UTC m=+1.792170285 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965194 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965035 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965200 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.465190901 +0000 UTC m=+1.792253517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965251 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/15782f65-35d2-4e95-bf49-81541c683ffe-etc-tuned\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: E1203 14:26:08.965262 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.465251813 +0000 UTC m=+1.792314439 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965282 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965315 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965348 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965374 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 
14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965400 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965425 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965450 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965478 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965506 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod 
\"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965535 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965572 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-auth-proxy-config\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965590 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965621 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965651 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965703 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965766 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " 
pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.971689 master-0 kubenswrapper[4409]: I1203 14:26:08.965793 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965821 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965906 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965928 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/da583723-b3ad-4a6f-b586-09b739bd7f8c-webhook-cert\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965934 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.965965 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.965977 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966140 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 
nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466131708 +0000 UTC m=+1.793194214 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966036 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966166 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/82bd0ae5-b35d-47c8-b693-b27a9a56476d-cache\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966090 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966219 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46621066 +0000 UTC m=+1.793273166 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966228 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-cni-binary-copy\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966240 4409 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966253 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966265 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966172 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966300 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466291003 +0000 UTC m=+1.793353599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966316 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b681889-eb2c-41fb-a1dc-69b99227b45b-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966330 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966038 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: 
object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966040 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966341 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966383 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466366065 +0000 UTC m=+1.793428651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966388 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966409 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466400056 +0000 UTC m=+1.793462672 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966419 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966433 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466423816 +0000 UTC m=+1.793486412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966461 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966474 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.466465868 +0000 UTC m=+1.793528494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966492 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966521 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466514029 +0000 UTC m=+1.793576535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966536 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966568 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-catalog-content\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966639 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966651 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a12409a-0be3-4023-9df3-a0f091aac8dc-metrics-client-ca\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966681 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.466669173 +0000 UTC m=+1.793731739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966857 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966895 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.966909 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46689848 +0000 UTC m=+1.793960986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966951 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.966978 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967019 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967029 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967041 4409 configmap.go:193] Couldn't get 
configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967054 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467045314 +0000 UTC m=+1.794107820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967079 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467071365 +0000 UTC m=+1.794133871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967078 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967120 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967159 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467150417 +0000 UTC m=+1.794213033 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967121 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967183 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967207 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967222 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467213439 +0000 UTC m=+1.794276025 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967240 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967263 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967286 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967310 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " 
pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967319 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967330 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967347 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967359 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467349413 +0000 UTC m=+1.794411919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967377 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967467 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967495 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967522 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967547 
4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967575 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967615 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a192c38a-4bfa-40fe-9a2d-d48260cf6443-utilities\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967630 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.975192 master-0 
kubenswrapper[4409]: E1203 14:26:08.967639 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967658 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967670 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467662011 +0000 UTC m=+1.794724517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967675 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967691 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967712 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967713 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467701903 +0000 UTC m=+1.794764499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967732 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967744 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967759 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467751514 +0000 UTC m=+1.794814190 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967781 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967794 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467787625 +0000 UTC m=+1.794850281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967792 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967811 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467801105 +0000 UTC m=+1.794863731 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967832 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467823156 +0000 UTC m=+1.794885762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967841 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967853 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967869 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.467862427 +0000 UTC m=+1.794924943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967890 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967906 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-audit-log\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967917 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967949 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967973 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.46796574 +0000 UTC m=+1.795028246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.967945 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.967985 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968001 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968027 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.468019962 +0000 UTC m=+1.795082588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968049 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968079 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968099 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968152 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468139795 +0000 UTC m=+1.795202311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968109 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968180 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968196 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968213 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468205287 +0000 UTC m=+1.795267913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968238 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968245 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968263 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec89938d-35a5-46ba-8c63-12489db18cbd-serving-cert\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968278 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468268879 +0000 UTC m=+1.795331385 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968268 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968317 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: E1203 14:26:08.968347 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468339271 +0000 UTC m=+1.795401787 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968315 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968357 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-textfile\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968382 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968403 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-iptables-alerter-script\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:08.975192 
master-0 kubenswrapper[4409]: I1203 14:26:08.968422 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968456 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968479 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.975192 master-0 kubenswrapper[4409]: I1203 14:26:08.968507 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968537 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968564 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968588 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968591 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968592 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968618 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.468610868 +0000 UTC m=+1.795673374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968636 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968653 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468643799 +0000 UTC m=+1.795706445 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968682 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968704 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468698551 +0000 UTC m=+1.795761057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968679 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968722 4409 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:08.978723 master-0 
kubenswrapper[4409]: I1203 14:26:08.968735 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968747 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468740302 +0000 UTC m=+1.795802808 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968763 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968774 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968785 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968808 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.468802224 +0000 UTC m=+1.795864730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968830 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968849 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968871 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968894 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: 
\"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-catalog-content\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968911 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968961 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968983 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6b681889-eb2c-41fb-a1dc-69b99227b45b-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968993 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.968964 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/803897bb-580e-4f7a-9be2-583fc607d1f6-operand-assets\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968932 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.968981 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969036 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969069 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469059261 +0000 UTC m=+1.796121887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969093 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969124 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969155 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969196 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.469164724 +0000 UTC m=+1.796227250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969201 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969237 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469229536 +0000 UTC m=+1.796292042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969241 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969269 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969287 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969290 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469285008 +0000 UTC m=+1.796347514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969306 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-metrics-client-ca\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969324 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969357 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969386 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969410 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469404401 +0000 UTC m=+1.796466907 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969386 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969441 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969462 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969483 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969503 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969524 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969542 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969560 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969569 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969565 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969576 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969606 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969649 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969658 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469651538 +0000 UTC m=+1.796714044 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969695 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469682259 +0000 UTC m=+1.796744775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969704 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969712 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469704779 +0000 UTC m=+1.796767295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969764 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969792 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969806 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.469797592 +0000 UTC m=+1.796860098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969825 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969846 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969867 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.969922 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.969958 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970040 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470025749 +0000 UTC m=+1.797088285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970074 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970165 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970227 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970275 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970285 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970311 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470303076 +0000 UTC m=+1.797365682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970237 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d7d6a05e-beee-40e9-b376-5c22e285b27a-serviceca\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970328 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470321987 +0000 UTC m=+1.797384613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970306 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970364 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970392 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970403 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970418 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970447 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970471 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470449751 +0000 UTC m=+1.797512297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970509 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970552 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970594 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970627 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-daemon-config\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970635 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-machine-approver-tls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970647 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970652 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970638 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/da583723-b3ad-4a6f-b586-09b739bd7f8c-ovnkube-identity-cm\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970714 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470703338 +0000 UTC m=+1.797765854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970695 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970739 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970745 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970769 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970797 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.47077058 +0000 UTC m=+1.797833116 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970826 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970827 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970831 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470821421 +0000 UTC m=+1.797883947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970884 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470868852 +0000 UTC m=+1.797931358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970910 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.970953 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.470940994 +0000 UTC m=+1.798003540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.970912 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971086 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971131 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971174 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971211 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971250 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.971269 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971292 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.971309 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.471299705 +0000 UTC m=+1.798362211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971335 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.971344 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: E1203 14:26:08.971382 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:08.978723 master-0 kubenswrapper[4409]: I1203 14:26:08.971388 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf"
Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971300 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74e39dce-29d5-4b2a-ab19-386b6cdae94d-metrics-client-ca\") pod
\"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971420 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.471412858 +0000 UTC m=+1.798475364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971440 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971369 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971466 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:09.043301 
master-0 kubenswrapper[4409]: E1203 14:26:08.971486 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.47147252 +0000 UTC m=+1.798535036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971505 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971516 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971539 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.471529911 +0000 UTC m=+1.798592487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971505 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e97e1725-cb55-4ce3-952d-a4fd0731577d-metrics-tls\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971545 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/eecc43f5-708f-4395-98cc-696b243d6321-node-bootstrap-token\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971551 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971565 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971584 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.471577023 +0000 UTC m=+1.798639649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971650 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.471641844 +0000 UTC m=+1.798704370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971679 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971688 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-metrics-client-ca\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971692 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-metrics-client-ca\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971708 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971762 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971776 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971807 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.471799759 +0000 UTC m=+1.798862265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971848 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971890 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971932 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: 
\"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971895 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.971971 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.971930 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972042 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972055 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472042176 +0000 UTC m=+1.799104692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972066 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/77430348-b53a-4898-8047-be8bb542a0a7-env-overrides\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972075 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972084 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972093 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472085477 +0000 UTC m=+1.799148103 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972129 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972141 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472125758 +0000 UTC m=+1.799188304 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972173 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972218 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972260 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972304 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972334 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472322464 +0000 UTC m=+1.799385030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972365 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972392 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972404 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972429 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472419356 +0000 UTC m=+1.799481972 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972434 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-metrics-client-ca\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972458 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972470 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972493 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972507 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472498639 +0000 UTC m=+1.799561255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972528 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972547 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972578 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:09.472570521 +0000 UTC m=+1.799633157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972604 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972612 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972629 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972638 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972644 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472636933 +0000 UTC m=+1.799699539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972678 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972705 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 
14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972762 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972789 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972824 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972871 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972791 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972905 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972926 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.972941 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.472928471 +0000 UTC m=+1.799991057 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972963 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972994 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.972993 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/19c2a40b-213c-42f1-9459-87c2e780a75f-whereabouts-configmap\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973044 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973045 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973072 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973104 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973115 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-default-certificate\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973122 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473106976 +0000 UTC m=+1.800169512 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973142 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973158 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973165 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973172 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973216 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: 
\"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973196 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973228 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473219909 +0000 UTC m=+1.800282415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973279 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.47326771 +0000 UTC m=+1.800330326 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: I1203 14:26:08.973303 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973319 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973344 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473337842 +0000 UTC m=+1.800400348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973421 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473403644 +0000 UTC m=+1.800466180 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973461 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473449876 +0000 UTC m=+1.800512412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:09.043301 master-0 kubenswrapper[4409]: E1203 14:26:08.973513 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.473504277 +0000 UTC m=+1.800566793 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:09.075085 master-0 kubenswrapper[4409]: I1203 14:26:09.075020 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075085 master-0 kubenswrapper[4409]: I1203 14:26:09.075070 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075085 master-0 kubenswrapper[4409]: I1203 14:26:09.075092 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075173 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075172 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-log-socket\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075222 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075243 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075292 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075357 master-0 kubenswrapper[4409]: I1203 14:26:09.075308 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.075357 
master-0 kubenswrapper[4409]: I1203 14:26:09.075331 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-netd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075568 master-0 kubenswrapper[4409]: I1203 14:26:09.075371 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075568 master-0 kubenswrapper[4409]: I1203 14:26:09.075487 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-os-release\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.075568 master-0 kubenswrapper[4409]: I1203 14:26:09.075535 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075650 master-0 kubenswrapper[4409]: I1203 14:26:09.075569 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-systemd-units\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.075650 master-0 kubenswrapper[4409]: I1203 14:26:09.075587 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075650 master-0 kubenswrapper[4409]: I1203 14:26:09.075601 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.075650 master-0 kubenswrapper[4409]: I1203 14:26:09.075631 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.075762 master-0 kubenswrapper[4409]: I1203 14:26:09.075642 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-etc-kubernetes\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075762 master-0 kubenswrapper[4409]: I1203 14:26:09.075712 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-k8s-cni-cncf-io\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.075762 master-0 kubenswrapper[4409]: I1203 14:26:09.075749 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" 
(UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-root\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.076300 master-0 kubenswrapper[4409]: I1203 14:26:09.076265 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.076404 master-0 kubenswrapper[4409]: I1203 14:26:09.076375 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:09.076469 master-0 kubenswrapper[4409]: I1203 14:26:09.076440 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.076501 master-0 kubenswrapper[4409]: I1203 14:26:09.076485 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.076542 master-0 kubenswrapper[4409]: I1203 14:26:09.076515 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/e97e1725-cb55-4ce3-952d-a4fd0731577d-host-etc-kube\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" Dec 03 14:26:09.076573 master-0 kubenswrapper[4409]: I1203 14:26:09.076504 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-bin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.076609 master-0 kubenswrapper[4409]: I1203 14:26:09.076578 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:09.076609 master-0 kubenswrapper[4409]: I1203 14:26:09.076579 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-multus-certs\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.076678 master-0 kubenswrapper[4409]: I1203 14:26:09.076555 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7d6a05e-beee-40e9-b376-5c22e285b27a-host\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:09.076713 master-0 kubenswrapper[4409]: I1203 14:26:09.076669 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-cni-multus\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.077242 master-0 kubenswrapper[4409]: I1203 14:26:09.077196 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.077294 master-0 kubenswrapper[4409]: I1203 14:26:09.077271 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:09.077327 master-0 kubenswrapper[4409]: I1203 14:26:09.077290 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.077383 master-0 kubenswrapper[4409]: I1203 14:26:09.077352 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6b681889-eb2c-41fb-a1dc-69b99227b45b-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: \"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:09.077561 master-0 kubenswrapper[4409]: 
I1203 14:26:09.077522 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.077615 master-0 kubenswrapper[4409]: I1203 14:26:09.077580 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-containers\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.077732 master-0 kubenswrapper[4409]: I1203 14:26:09.077706 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.077800 master-0 kubenswrapper[4409]: I1203 14:26:09.077764 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-sys\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.077847 master-0 kubenswrapper[4409]: I1203 14:26:09.077809 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:09.077847 master-0 
kubenswrapper[4409]: I1203 14:26:09.077747 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-rootfs\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:09.077910 master-0 kubenswrapper[4409]: I1203 14:26:09.077871 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.077978 master-0 kubenswrapper[4409]: I1203 14:26:09.077920 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078075 master-0 kubenswrapper[4409]: I1203 14:26:09.078058 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078119 master-0 kubenswrapper[4409]: I1203 14:26:09.078063 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-var-lib-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.078119 master-0 
kubenswrapper[4409]: I1203 14:26:09.078086 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.078189 master-0 kubenswrapper[4409]: I1203 14:26:09.078126 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.078189 master-0 kubenswrapper[4409]: I1203 14:26:09.078127 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-run\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078189 master-0 kubenswrapper[4409]: I1203 14:26:09.078177 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-socket-dir-parent\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: I1203 14:26:09.078198 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: 
I1203 14:26:09.078218 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-node-log\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: I1203 14:26:09.078268 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: I1203 14:26:09.078310 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: I1203 14:26:09.078331 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-kubernetes\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078410 master-0 kubenswrapper[4409]: I1203 14:26:09.078350 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078445 4409 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078463 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-hostroot\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078484 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078511 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysconfig\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078530 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078579 master-0 kubenswrapper[4409]: I1203 14:26:09.078555 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-system-cni-dir\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.078743 master-0 kubenswrapper[4409]: I1203 14:26:09.078608 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-multus-conf-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.078743 master-0 kubenswrapper[4409]: I1203 14:26:09.078628 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.078743 master-0 kubenswrapper[4409]: I1203 14:26:09.078668 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.078743 master-0 kubenswrapper[4409]: I1203 14:26:09.078675 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-lib-modules\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078743 master-0 kubenswrapper[4409]: I1203 14:26:09.078734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:09.078880 master-0 kubenswrapper[4409]: I1203 14:26:09.078748 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/69b752ed-691c-4574-a01e-428d4bf85b75-etc-docker\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.078880 master-0 kubenswrapper[4409]: I1203 14:26:09.078775 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:09.078880 master-0 kubenswrapper[4409]: I1203 14:26:09.078738 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-slash\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.078880 master-0 kubenswrapper[4409]: I1203 14:26:09.078837 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.078880 master-0 kubenswrapper[4409]: I1203 14:26:09.078872 4409 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-sys\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.079048 master-0 kubenswrapper[4409]: I1203 14:26:09.078881 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.079085 master-0 kubenswrapper[4409]: I1203 14:26:09.079053 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-host\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.079402 master-0 kubenswrapper[4409]: I1203 14:26:09.079365 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:09.079457 master-0 kubenswrapper[4409]: I1203 14:26:09.079420 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:09.079591 master-0 kubenswrapper[4409]: I1203 14:26:09.079560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.079651 master-0 kubenswrapper[4409]: I1203 14:26:09.079629 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:09.079813 master-0 kubenswrapper[4409]: I1203 14:26:09.079786 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.079875 master-0 kubenswrapper[4409]: I1203 14:26:09.079845 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-cni-bin\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.079875 master-0 kubenswrapper[4409]: I1203 14:26:09.079864 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-node-exporter-wtmp\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.079938 master-0 kubenswrapper[4409]: I1203 14:26:09.079793 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-containers\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:09.079938 master-0 kubenswrapper[4409]: I1203 14:26:09.079787 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42c95e54-b4ba-4b19-a97c-abcec840ac5d-hosts-file\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:09.080027 master-0 kubenswrapper[4409]: I1203 14:26:09.079985 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.080190 master-0 kubenswrapper[4409]: I1203 14:26:09.080151 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.080190 master-0 kubenswrapper[4409]: I1203 14:26:09.080172 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-ovn\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.080254 master-0 kubenswrapper[4409]: I1203 14:26:09.080214 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-run-netns\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.080360 master-0 kubenswrapper[4409]: I1203 14:26:09.080325 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:09.080466 master-0 kubenswrapper[4409]: I1203 14:26:09.080438 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.080570 master-0 kubenswrapper[4409]: I1203 14:26:09.080546 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.080603 master-0 kubenswrapper[4409]: I1203 14:26:09.080560 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/19c2a40b-213c-42f1-9459-87c2e780a75f-cnibin\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.080818 master-0 kubenswrapper[4409]: I1203 14:26:09.080598 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-etc-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.080818 master-0 kubenswrapper[4409]: I1203 14:26:09.080668 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.080818 master-0 kubenswrapper[4409]: I1203 14:26:09.080782 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-netns\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.081073 master-0 kubenswrapper[4409]: I1203 14:26:09.081045 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.081129 master-0 kubenswrapper[4409]: I1203 14:26:09.081100 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:09.081185 master-0 kubenswrapper[4409]: I1203 14:26:09.081159 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.081232 master-0 kubenswrapper[4409]: I1203 14:26:09.081210 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-systemd\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.081262 master-0 kubenswrapper[4409]: I1203 14:26:09.081108 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-run-ovn-kubernetes\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.081262 master-0 kubenswrapper[4409]: I1203 14:26:09.081163 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/82bd0ae5-b35d-47c8-b693-b27a9a56476d-etc-docker\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:09.081322 master-0 kubenswrapper[4409]: I1203 14:26:09.081255 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.081352 master-0 kubenswrapper[4409]: I1203 14:26:09.081317 
4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.081352 master-0 kubenswrapper[4409]: I1203 14:26:09.081327 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:09.081417 master-0 kubenswrapper[4409]: I1203 14:26:09.081395 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.081489 master-0 kubenswrapper[4409]: I1203 14:26:09.081461 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:09.081556 master-0 kubenswrapper[4409]: I1203 14:26:09.081534 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.081590 master-0 kubenswrapper[4409]: I1203 14:26:09.081568 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-openvswitch\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.081740 master-0 kubenswrapper[4409]: I1203 14:26:09.081712 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-cnibin\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.082188 master-0 kubenswrapper[4409]: I1203 14:26:09.082154 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.082252 master-0 kubenswrapper[4409]: I1203 14:26:09.082232 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.082341 master-0 kubenswrapper[4409]: I1203 14:26:09.082310 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-run-systemd\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.082537 master-0 kubenswrapper[4409]: I1203 14:26:09.082504 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.082570 master-0 kubenswrapper[4409]: I1203 14:26:09.082528 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-modprobe-d\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.082605 master-0 kubenswrapper[4409]: I1203 14:26:09.082582 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-node-pullsecrets\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.082821 master-0 kubenswrapper[4409]: I1203 14:26:09.082790 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.082882 master-0 kubenswrapper[4409]: I1203 14:26:09.082858 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24dfafc9-86a9-450e-ac62-a871138106c0-audit-dir\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.082974 master-0 kubenswrapper[4409]: I1203 14:26:09.082917 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.082974 master-0 kubenswrapper[4409]: I1203 14:26:09.082964 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.083050 master-0 kubenswrapper[4409]: I1203 14:26:09.082977 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-dir\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.083086 master-0 kubenswrapper[4409]: I1203 14:26:09.083025 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:09.083220 master-0 kubenswrapper[4409]: I1203 14:26:09.083069 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-os-release\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.083220 master-0 kubenswrapper[4409]: I1203 14:26:09.083207 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.083285 master-0 kubenswrapper[4409]: I1203 14:26:09.083073 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:09.083285 master-0 kubenswrapper[4409]: I1203 14:26:09.083256 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.083423 master-0 kubenswrapper[4409]: I1203 14:26:09.083399 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-etc-sysctl-conf\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.083465 master-0 kubenswrapper[4409]: I1203 14:26:09.083430 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-system-cni-dir\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.083465 master-0 kubenswrapper[4409]: I1203 14:26:09.083452 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod 
\"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:09.083528 master-0 kubenswrapper[4409]: I1203 14:26:09.083473 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ec89938d-35a5-46ba-8c63-12489db18cbd-etc-ssl-certs\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:09.083725 master-0 kubenswrapper[4409]: I1203 14:26:09.083698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.083758 master-0 kubenswrapper[4409]: I1203 14:26:09.083747 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/77430348-b53a-4898-8047-be8bb542a0a7-host-kubelet\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.083830 master-0 kubenswrapper[4409]: I1203 14:26:09.083805 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:09.083876 master-0 kubenswrapper[4409]: I1203 14:26:09.083852 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.083921 master-0 kubenswrapper[4409]: I1203 14:26:09.083904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-host-slash\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:09.084022 master-0 kubenswrapper[4409]: I1203 14:26:09.083981 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.084112 master-0 kubenswrapper[4409]: I1203 14:26:09.084084 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.084156 master-0 kubenswrapper[4409]: I1203 14:26:09.084138 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c777c9de-1ace-46be-b5c2-c71d252f53f4-host-var-lib-kubelet\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.084299 master-0 kubenswrapper[4409]: I1203 14:26:09.084272 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit-dir\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.084331 master-0 kubenswrapper[4409]: I1203 14:26:09.084263 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/15782f65-35d2-4e95-bf49-81541c683ffe-var-lib-kubelet\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.204938 master-0 kubenswrapper[4409]: I1203 14:26:09.204878 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:26:09.205208 master-0 kubenswrapper[4409]: I1203 14:26:09.204967 4409 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:09Z","lastTransitionTime":"2025-12-03T14:26:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:26:09.208209 master-0 kubenswrapper[4409]: E1203 14:26:09.208160 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:09.217954 master-0 kubenswrapper[4409]: E1203 14:26:09.217892 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:09.217954 master-0 kubenswrapper[4409]: E1203 14:26:09.217957 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.218258 master-0 kubenswrapper[4409]: E1203 14:26:09.217985 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.218258 master-0 kubenswrapper[4409]: E1203 14:26:09.218130 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.718093943 +0000 UTC m=+2.045156479 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.218258 master-0 kubenswrapper[4409]: E1203 14:26:09.218173 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Dec 03 14:26:09.218532 master-0 kubenswrapper[4409]: I1203 14:26:09.218496 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqqf2\" (UniqueName: \"kubernetes.io/projected/b71ac8a5-987d-4eba-8bc0-a091f0a0de16-kube-api-access-tqqf2\") pod \"node-exporter-b62gf\" (UID: \"b71ac8a5-987d-4eba-8bc0-a091f0a0de16\") " pod="openshift-monitoring/node-exporter-b62gf" Dec 03 14:26:09.219169 master-0 kubenswrapper[4409]: E1203 14:26:09.219134 4409 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:09.219234 master-0 kubenswrapper[4409]: E1203 14:26:09.219180 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.219234 master-0 kubenswrapper[4409]: E1203 14:26:09.219204 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.219365 master-0 kubenswrapper[4409]: E1203 
14:26:09.219281 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.719262487 +0000 UTC m=+2.046325033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.219734 master-0 kubenswrapper[4409]: E1203 14:26:09.219692 4409 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-03T14:26:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"5a54df78-64a7-4b65-a168-d6e871bf4ce7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:09.219734 master-0 kubenswrapper[4409]: E1203 14:26:09.219731 4409 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 03 14:26:09.220841 master-0 kubenswrapper[4409]: E1203 14:26:09.220793 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:09.222570 master-0 kubenswrapper[4409]: E1203 14:26:09.222532 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:09.222570 master-0 kubenswrapper[4409]: E1203 14:26:09.222571 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.222702 master-0 kubenswrapper[4409]: E1203 14:26:09.222586 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object 
"openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.222702 master-0 kubenswrapper[4409]: E1203 14:26:09.222654 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.722634542 +0000 UTC m=+2.049697048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.223301 master-0 kubenswrapper[4409]: I1203 14:26:09.223272 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-bound-sa-token\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227441 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227486 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227501 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod 
openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227569 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.727547882 +0000 UTC m=+2.054610388 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227609 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227619 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227627 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.236041 master-0 kubenswrapper[4409]: E1203 14:26:09.227655 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.727646274 +0000 UTC m=+2.054708850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.239692 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.239734 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.239799 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.739777788 +0000 UTC m=+2.066840304 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240129 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240145 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240156 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240192 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.74018259 +0000 UTC m=+2.067245096 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240342 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240355 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240365 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240398 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.740390106 +0000 UTC m=+2.067452612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240432 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240473 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240493 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240574 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.74054782 +0000 UTC m=+2.067610366 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240632 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240650 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240668 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240726 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.740707115 +0000 UTC m=+2.067769681 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240783 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240803 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240817 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.240871 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.740854039 +0000 UTC m=+2.067916595 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: I1203 14:26:09.242321 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6zfj\" (UniqueName: \"kubernetes.io/projected/d7d6a05e-beee-40e9-b376-5c22e285b27a-kube-api-access-l6zfj\") pod \"node-ca-4p4zh\" (UID: \"d7d6a05e-beee-40e9-b376-5c22e285b27a\") " pod="openshift-image-registry/node-ca-4p4zh" Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.242435 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.242456 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.242474 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: E1203 14:26:09.242528 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 
podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.742509826 +0000 UTC m=+2.069572372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.244029 master-0 kubenswrapper[4409]: I1203 14:26:09.243540 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wc6r\" (UniqueName: \"kubernetes.io/projected/6935a3f8-723e-46e6-8498-483f34bf0825-kube-api-access-8wc6r\") pod \"ovnkube-control-plane-f9f7f4946-48mrg\" (UID: \"6935a3f8-723e-46e6-8498-483f34bf0825\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-48mrg" Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244087 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244112 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244122 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244160 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.744148032 +0000 UTC m=+2.071210528 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244203 4409 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244215 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244221 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.244244 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.744238255 +0000 UTC m=+2.071300761 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: I1203 14:26:09.245147 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wddf4\" (UniqueName: \"kubernetes.io/projected/8a12409a-0be3-4023-9df3-a0f091aac8dc-kube-api-access-wddf4\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.245224 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.245235 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.245242 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.245277 master-0 kubenswrapper[4409]: E1203 14:26:09.245239 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Dec 03 14:26:09.245834 master-0 kubenswrapper[4409]: I1203 14:26:09.245304 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szdzx\" (UniqueName: \"kubernetes.io/projected/eecc43f5-708f-4395-98cc-696b243d6321-kube-api-access-szdzx\") pod \"machine-config-server-pvrfs\" (UID: \"eecc43f5-708f-4395-98cc-696b243d6321\") " pod="openshift-machine-config-operator/machine-config-server-pvrfs" Dec 03 14:26:09.245834 master-0 kubenswrapper[4409]: I1203 14:26:09.245705 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvzf2\" (UniqueName: \"kubernetes.io/projected/b3c1ebb9-f052-410b-a999-45e9b75b0e58-kube-api-access-mvzf2\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:09.245834 master-0 kubenswrapper[4409]: E1203 14:26:09.245767 4409 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:26:09.245834 master-0 kubenswrapper[4409]: I1203 14:26:09.245786 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpv9\" (UniqueName: \"kubernetes.io/projected/6ef37bba-85d9-4303-80c0-aac3dc49d3d9-kube-api-access-kcpv9\") pod \"iptables-alerter-n24qb\" (UID: \"6ef37bba-85d9-4303-80c0-aac3dc49d3d9\") " pod="openshift-network-operator/iptables-alerter-n24qb" Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.245269 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.745260784 +0000 UTC m=+2.072323340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.245295 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.246062 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.246079 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.246136 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.746116828 +0000 UTC m=+2.073179344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.245908 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.246164 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.246172 master-0 kubenswrapper[4409]: E1203 14:26:09.246175 4409 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.246576 master-0 kubenswrapper[4409]: E1203 14:26:09.246204 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.74619574 +0000 UTC m=+2.073258266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.253163 master-0 kubenswrapper[4409]: I1203 14:26:09.247168 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbch4\" (UniqueName: \"kubernetes.io/projected/799e819f-f4b2-4ac9-8fa4-7d4da7a79285-kube-api-access-cbch4\") pod \"machine-config-daemon-2ztl9\" (UID: \"799e819f-f4b2-4ac9-8fa4-7d4da7a79285\") " pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" Dec 03 14:26:09.253163 master-0 kubenswrapper[4409]: I1203 14:26:09.252868 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqnb7\" (UniqueName: \"kubernetes.io/projected/da583723-b3ad-4a6f-b586-09b739bd7f8c-kube-api-access-gqnb7\") pod \"network-node-identity-c8csx\" (UID: \"da583723-b3ad-4a6f-b586-09b739bd7f8c\") " pod="openshift-network-node-identity/network-node-identity-c8csx" Dec 03 14:26:09.254999 master-0 kubenswrapper[4409]: I1203 14:26:09.254898 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57rrp\" (UniqueName: \"kubernetes.io/projected/5c00a797-4c60-43dd-bd04-16b2c6f1b6a8-kube-api-access-57rrp\") pod \"router-default-54f97f57-rr9px\" (UID: \"5c00a797-4c60-43dd-bd04-16b2c6f1b6a8\") " pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:09.255108 master-0 kubenswrapper[4409]: E1203 14:26:09.255017 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.255108 master-0 
kubenswrapper[4409]: E1203 14:26:09.255033 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.255108 master-0 kubenswrapper[4409]: E1203 14:26:09.255046 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.255108 master-0 kubenswrapper[4409]: E1203 14:26:09.255094 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.755079112 +0000 UTC m=+2.082141618 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255644 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255678 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255690 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255749 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.755729481 +0000 UTC m=+2.082791987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: I1203 14:26:09.255801 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-kube-api-access-97xsn\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255921 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255934 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255961 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.755953687 +0000 UTC m=+2.083016183 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255985 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.255994 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256014 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256036 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756030239 +0000 UTC m=+2.083092745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: I1203 14:26:09.256352 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cvkj\" (UniqueName: \"kubernetes.io/projected/4669137a-fbc4-41e1-8eeb-5f06b9da2641-kube-api-access-7cvkj\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256429 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256441 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: I1203 14:26:09.256444 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-bound-sa-token\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256466 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756458741 +0000 UTC m=+2.083521247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: I1203 14:26:09.256473 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd79t\" (UniqueName: \"kubernetes.io/projected/829d285f-d532-45e4-b1ec-54adbc21b9f9-kube-api-access-wd79t\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256536 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256545 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256562 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256572 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" 
not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256601 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256608 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756597165 +0000 UTC m=+2.083659761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256614 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256549 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256623 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256626 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256657 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756646087 +0000 UTC m=+2.083708593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256674 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756667197 +0000 UTC m=+2.083729803 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256684 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256695 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256704 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256724 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256731 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.756722939 +0000 UTC m=+2.083785535 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256736 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256743 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.256807 master-0 kubenswrapper[4409]: E1203 14:26:09.256769 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.75676287 +0000 UTC m=+2.083825376 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.258097 master-0 kubenswrapper[4409]: I1203 14:26:09.256874 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtgh\" (UniqueName: \"kubernetes.io/projected/15782f65-35d2-4e95-bf49-81541c683ffe-kube-api-access-5jtgh\") pod \"tuned-7zkbg\" (UID: \"15782f65-35d2-4e95-bf49-81541c683ffe\") " pod="openshift-cluster-node-tuning-operator/tuned-7zkbg" Dec 03 14:26:09.258097 master-0 kubenswrapper[4409]: I1203 14:26:09.257583 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4tx\" (UniqueName: \"kubernetes.io/projected/44af6af5-cecb-4dc4-b793-e8e350f8a47d-kube-api-access-kk4tx\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:09.258598 master-0 kubenswrapper[4409]: E1203 14:26:09.258573 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.258598 master-0 kubenswrapper[4409]: E1203 14:26:09.258592 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.258598 master-0 kubenswrapper[4409]: E1203 14:26:09.258601 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.258708 master-0 kubenswrapper[4409]: E1203 14:26:09.258636 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.758625173 +0000 UTC m=+2.085687719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.259491 master-0 kubenswrapper[4409]: I1203 14:26:09.259371 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p667q\" (UniqueName: \"kubernetes.io/projected/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-api-access-p667q\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:09.259886 master-0 kubenswrapper[4409]: I1203 14:26:09.259848 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbdtx\" (UniqueName: \"kubernetes.io/projected/19c2a40b-213c-42f1-9459-87c2e780a75f-kube-api-access-mbdtx\") pod \"multus-additional-cni-plugins-42hmk\" (UID: \"19c2a40b-213c-42f1-9459-87c2e780a75f\") " pod="openshift-multus/multus-additional-cni-plugins-42hmk" Dec 03 14:26:09.260782 master-0 kubenswrapper[4409]: E1203 14:26:09.260749 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not 
registered Dec 03 14:26:09.260782 master-0 kubenswrapper[4409]: E1203 14:26:09.260779 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.260863 master-0 kubenswrapper[4409]: E1203 14:26:09.260794 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.260863 master-0 kubenswrapper[4409]: E1203 14:26:09.260857 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.760838826 +0000 UTC m=+2.087901342 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.261633 master-0 kubenswrapper[4409]: E1203 14:26:09.261595 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.261633 master-0 kubenswrapper[4409]: E1203 14:26:09.261618 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.261633 master-0 kubenswrapper[4409]: E1203 14:26:09.261627 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.261773 master-0 kubenswrapper[4409]: E1203 14:26:09.261662 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.761653499 +0000 UTC m=+2.088715995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.261773 master-0 kubenswrapper[4409]: I1203 14:26:09.261679 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpl5j\" (UniqueName: \"kubernetes.io/projected/4df2889c-99f7-402a-9d50-18ccf427179c-kube-api-access-lpl5j\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:09.262506 master-0 kubenswrapper[4409]: I1203 14:26:09.262463 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7lp2\" (UniqueName: \"kubernetes.io/projected/74e39dce-29d5-4b2a-ab19-386b6cdae94d-kube-api-access-w7lp2\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:09.265993 master-0 kubenswrapper[4409]: E1203 14:26:09.265959 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.265993 master-0 kubenswrapper[4409]: E1203 14:26:09.265982 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.265993 master-0 kubenswrapper[4409]: E1203 14:26:09.265992 4409 projected.go:194] Error 
preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.266144 master-0 kubenswrapper[4409]: E1203 14:26:09.266041 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.766031393 +0000 UTC m=+2.093093899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.268963 master-0 kubenswrapper[4409]: E1203 14:26:09.268926 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.268963 master-0 kubenswrapper[4409]: E1203 14:26:09.268954 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.268963 master-0 kubenswrapper[4409]: E1203 14:26:09.268966 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.269190 master-0 kubenswrapper[4409]: E1203 14:26:09.269033 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:09.769019848 +0000 UTC m=+2.096082364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.413386 master-0 kubenswrapper[4409]: I1203 14:26:09.413337 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:09.497886 master-0 kubenswrapper[4409]: I1203 14:26:09.497830 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:09.497886 master-0 kubenswrapper[4409]: I1203 14:26:09.497884 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: I1203 14:26:09.497911 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: I1203 14:26:09.497931 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: I1203 14:26:09.497972 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: I1203 14:26:09.497993 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498090 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: 
E1203 14:26:09.498116 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498145 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498166 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498194 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.498170476 +0000 UTC m=+2.825232982 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498114 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498273 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498229 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.498208097 +0000 UTC m=+2.825270603 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498380 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.498368462 +0000 UTC m=+2.825430968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:09.498345 master-0 kubenswrapper[4409]: E1203 14:26:09.498394 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.498387122 +0000 UTC m=+2.825449628 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498413 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.498477 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.498508 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert 
podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.498424193 +0000 UTC m=+2.825486919 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498620 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498773 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498829 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498876 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498917 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.498973 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499087 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499124 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499158 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499175 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499199 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499192155 +0000 UTC m=+2.826254661 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499215 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499208605 +0000 UTC m=+2.826271111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499236 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499270 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499290 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499313 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: 
\"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499334 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499355 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499374 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499378 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499392 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod 
\"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499415 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499431 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499415861 +0000 UTC m=+2.826478557 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499461 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499473 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.499612 
master-0 kubenswrapper[4409]: E1203 14:26:09.499484 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499478003 +0000 UTC m=+2.826540509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499517 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: E1203 14:26:09.499538 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499533085 +0000 UTC m=+2.826595591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499532 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499577 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499608 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499629 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: 
\"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499654 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499677 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:09.499612 master-0 kubenswrapper[4409]: I1203 14:26:09.499708 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499729 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499752 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499772 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499811 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499833 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499853 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499873 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499916 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.499951 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499608 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500065 4409 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500073 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:09.503494 master-0 
kubenswrapper[4409]: E1203 14:26:09.500106 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500151 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500160 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500076 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500225 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500242 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500241 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500265 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object 
"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499726 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500291 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500170 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500152 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499767 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499792 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500352 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499839 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" 
not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499875 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500393 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499914 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499946 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499969 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.499956997 +0000 UTC m=+2.827019703 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499987 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.500689 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500708 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500723 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500701098 +0000 UTC m=+2.827763614 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500026 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499996 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500041 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500207 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500770 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500755769 +0000 UTC m=+2.827818465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500776 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499656 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499753 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.500798 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.499851 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500817 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.500806931 +0000 UTC m=+2.827869627 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500865 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500855902 +0000 UTC m=+2.827918608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500864 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500883 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500874203 +0000 UTC m=+2.827936919 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500899 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500891993 +0000 UTC m=+2.827954719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500925 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500915854 +0000 UTC m=+2.827978560 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500946 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500938344 +0000 UTC m=+2.828001060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500967 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500959905 +0000 UTC m=+2.828022621 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.500988 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.500979876 +0000 UTC m=+2.828042572 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501025 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501000876 +0000 UTC m=+2.828063592 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501044 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501038257 +0000 UTC m=+2.828100973 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501064 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501056968 +0000 UTC m=+2.828119584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501082 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501076328 +0000 UTC m=+2.828139044 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501102 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501094269 +0000 UTC m=+2.828156975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501122 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501113269 +0000 UTC m=+2.828175975 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501142 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50113589 +0000 UTC m=+2.828198476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501161 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501154031 +0000 UTC m=+2.828216727 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501181 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501173461 +0000 UTC m=+2.828236167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501203 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501192952 +0000 UTC m=+2.828255648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501223 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501215182 +0000 UTC m=+2.828277888 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501239 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501234313 +0000 UTC m=+2.828297009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501255 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501249823 +0000 UTC m=+2.828312519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501273 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501265814 +0000 UTC m=+2.828328520 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501291 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501284784 +0000 UTC m=+2.828347490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501309 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501301845 +0000 UTC m=+2.828364551 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501327 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501320385 +0000 UTC m=+2.828383091 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501346 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501339326 +0000 UTC m=+2.828402052 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501363 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501356486 +0000 UTC m=+2.828419102 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501416 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501458 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501469 4409 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501457599 +0000 UTC m=+2.828520265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501504 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50149163 +0000 UTC m=+2.828554336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501514 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501521 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501512861 +0000 UTC m=+2.828575577 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501516 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501555 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501546202 +0000 UTC m=+2.828608908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501547 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501575 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501567982 +0000 UTC m=+2.828630588 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501578 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501596 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501608 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501598223 +0000 UTC m=+2.828660749 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501636 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501637 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501667 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501658395 +0000 UTC m=+2.828721001 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501686 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501711 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501767 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501738 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501778 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501769218 +0000 UTC m=+2.828831744 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501803 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501817 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501807419 +0000 UTC m=+2.828870165 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501839 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501866 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50184916 +0000 UTC m=+2.828911676 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501878 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501902 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.501896232 +0000 UTC m=+2.828958748 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501901 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501943 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501956 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.501976 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 
14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.501988 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.501979304 +0000 UTC m=+2.829041810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.502028 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502047 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502072 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502081 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.502071177 +0000 UTC m=+2.829133703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502107 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502110 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502100947 +0000 UTC m=+2.829163463 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.502107 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502139 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502130778 +0000 UTC m=+2.829193474 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: E1203 14:26:09.502140 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:09.503494 master-0 kubenswrapper[4409]: I1203 14:26:09.502161 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502171 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502163759 +0000 UTC m=+2.829226285 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502208 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502241 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502239 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502270 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502272 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502263982 +0000 UTC m=+2.829326488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502281 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502352 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502362 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502351645 +0000 UTC m=+2.829414351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502384 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502414 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502420 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502434 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502425127 +0000 UTC m=+2.829487843 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502453 4409 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502481 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502472768 +0000 UTC m=+2.829535264 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502487 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502500 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:09.514756 
master-0 kubenswrapper[4409]: E1203 14:26:09.502530 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502520249 +0000 UTC m=+2.829582765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502544 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502569 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502561861 +0000 UTC m=+2.829624457 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502590 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502624 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502653 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502654 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502664 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502650633 +0000 UTC m=+2.829713149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502690 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502690 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502725 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502737 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.502729945 +0000 UTC m=+2.829792451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502763 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502749886 +0000 UTC m=+2.829812402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502764 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502791 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502783467 +0000 UTC m=+2.829845983 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502812 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502806177 +0000 UTC m=+2.829868683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502829 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502851 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502878 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502898 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502935 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502942 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502950 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502950 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502964 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502972 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.502992 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.502997 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503021 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.502993253 +0000 UTC m=+2.830055779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503033 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503040 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503032294 +0000 UTC m=+2.830095010 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503047 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503055 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503048104 +0000 UTC m=+2.830110610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503055 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503072 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503066255 +0000 UTC m=+2.830128761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503094 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503123 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503149 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503171 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503217 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503206569 +0000 UTC m=+2.830269135 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503176 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503280 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503295 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503302 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503330 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503303 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503343 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503330502 +0000 UTC m=+2.830393028 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503367 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503373 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503378 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503370633 +0000 UTC m=+2.830433149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503338 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503402 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503394454 +0000 UTC m=+2.830456980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503407 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503421 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503411955 +0000 UTC m=+2.830474471 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503454 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503439825 +0000 UTC m=+2.830502351 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503481 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503523 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503542 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503583 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503556519 +0000 UTC m=+2.830619205 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503603 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503625 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50361396 +0000 UTC m=+2.830676486 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503648 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503639111 +0000 UTC m=+2.830701637 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503669 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503661232 +0000 UTC m=+2.830723758 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503691 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503777 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503801 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503850 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.503807 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503868 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503897 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503851 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503841467 +0000 UTC m=+2.830903993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503954 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50394025 +0000 UTC m=+2.831002776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.503985 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.503976961 +0000 UTC m=+2.831039487 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504034 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504025922 +0000 UTC m=+2.831088438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504073 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504125 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504162 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504180 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504199 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504189987 +0000 UTC m=+2.831252493 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504232 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504249 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504240138 +0000 UTC m=+2.831302834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504274 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504265769 +0000 UTC m=+2.831328485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504294 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504283249 +0000 UTC m=+2.831345965 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504319 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504356 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504364 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504351551 +0000 UTC m=+2.831414087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504402 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504388332 +0000 UTC m=+2.831450868 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504229 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504477 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504532 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504595 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504617 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504655 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504667 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50465628 +0000 UTC m=+2.831718996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504716 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504775 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: E1203 14:26:09.504723 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:09.514756 master-0 kubenswrapper[4409]: I1203 14:26:09.504859 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:09.526697 master-0
kubenswrapper[4409]: E1203 14:26:09.504887 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504873256 +0000 UTC m=+2.831935952 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.504918 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.504951 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.504958 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.504981 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.504768 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505039 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.504993059 +0000 UTC m=+2.832055595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505056 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505059 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505069 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:09.526697 
master-0 kubenswrapper[4409]: I1203 14:26:09.505079 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505092 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505081292 +0000 UTC m=+2.832143998 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.504799 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505109 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505101923 +0000 UTC m=+2.832164669 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505108 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.504843 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505126 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505118663 +0000 UTC m=+2.832181399 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505154 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505160 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505150704 +0000 UTC m=+2.832213410 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505190 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505181275 +0000 UTC m=+2.832244011 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505207 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505198665 +0000 UTC m=+2.832261381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505159 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505221 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505213856 +0000 UTC m=+2.832276572 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505223 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505242 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505264 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505253107 +0000 UTC m=+2.832315613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505302 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505309 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505329 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505320659 +0000 UTC m=+2.832383355 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505353 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505367 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505391 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50537103 +0000 UTC m=+2.832433546 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505411 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.505404081 +0000 UTC m=+2.832466797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505440 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505492 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505538 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505526475 +0000 UTC m=+2.832588991 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505570 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.505701 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.505676719 +0000 UTC m=+2.832739255 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.505963 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506030 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod 
\"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506056 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506077 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506098 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506122 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506136 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506145 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506160 4409 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506164 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506154202 +0000 UTC m=+2.833216728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506210 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506214 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506221 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506189 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506269 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506236 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506227444 +0000 UTC m=+2.833290051 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506414 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506420 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506443 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.50643414 +0000 UTC m=+2.833496646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506483 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506542 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506529883 +0000 UTC m=+2.833592389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506541 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506564 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506556594 +0000 UTC m=+2.833619330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506580 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506572344 +0000 UTC m=+2.833635100 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506604 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506594525 +0000 UTC m=+2.833657271 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506600 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506616 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506645 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 
14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506651 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506641456 +0000 UTC m=+2.833703972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506710 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506718 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506696498 +0000 UTC m=+2.833759044 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506729 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506743 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506755 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506742469 +0000 UTC m=+2.833805005 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506790 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506809 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506842 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506863 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506851392 +0000 UTC m=+2.833913908 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506886 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.506877893 +0000 UTC m=+2.833940419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506933 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506936 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.506997 4409 secret.go:189] 
Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507036 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507026847 +0000 UTC m=+2.834089353 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.506991 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507049 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507072 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 
14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507086 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507076679 +0000 UTC m=+2.834139195 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507128 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507131 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507163 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507196 4409 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507205 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507227 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507243 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507232423 +0000 UTC m=+2.834294929 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507257 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507270 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507302 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507309 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507298195 +0000 UTC m=+2.834360881 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507306 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507337 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507323746 +0000 UTC m=+2.834386472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507358 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507348806 +0000 UTC m=+2.834411502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507372 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507374 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507365677 +0000 UTC m=+2.834428403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507442 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507426628 +0000 UTC m=+2.834489174 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507475 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507526 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507538 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507573 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507554 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.507540692 +0000 UTC m=+2.834603418 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507631 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507609614 +0000 UTC m=+2.834672160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507704 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507742 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507775 4409 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507783 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507773268 +0000 UTC m=+2.834835964 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507834 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.5078205 +0000 UTC m=+2.834883046 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507876 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507900 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: I1203 14:26:09.507924 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507931 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507923523 +0000 UTC m=+2.834986029 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:09.526697 master-0 kubenswrapper[4409]: E1203 14:26:09.507958 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.507964 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508030 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508072 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508625 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.507994365 +0000 UTC m=+2.835057081 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508672 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508709 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508722 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508742 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 
14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508761 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508739606 +0000 UTC m=+2.835802142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508787 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508801 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508807 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508793347 +0000 UTC m=+2.835855893 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508834 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508823288 +0000 UTC m=+2.835886004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508857 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508850759 +0000 UTC m=+2.835913485 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508878 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508899 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508934 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508950 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508936831 +0000 UTC m=+2.835999367 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.508938 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508969 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508979 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508967642 +0000 UTC m=+2.836030178 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.508995 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509028 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509016023 +0000 UTC m=+2.836078599 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509059 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509099 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509115 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509103326 +0000 UTC m=+2.836165832 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509120 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509142 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509160 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509167 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509152767 +0000 UTC m=+2.836215293 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509189 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509195 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509186128 +0000 UTC m=+2.836248824 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509237 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509281 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509310 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509339 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509318142 +0000 UTC m=+2.836380678 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509362 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509392 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509384414 +0000 UTC m=+2.836447110 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509394 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509399 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509434 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509422795 +0000 UTC m=+2.836485521 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509389 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509452 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509443556 +0000 UTC m=+2.836506282 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509477 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509487 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509509 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509533 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509547 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509527838 +0000 UTC m=+2.836590524 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509561 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509570 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509561179 +0000 UTC m=+2.836623885 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509598 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509630 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509641 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509662 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509702 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509676942 +0000 UTC m=+2.836739558 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509740 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509755 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509743044 +0000 UTC m=+2.836805560 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509779 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509769095 +0000 UTC m=+2.836831791 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509742 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509803 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509816 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509808206 +0000 UTC m=+2.836870902 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509840 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509879 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509896 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.509928 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509948 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509969 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.509948 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.509932429 +0000 UTC m=+2.836995105 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510018 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510029 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510054 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510105 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510089824 +0000 UTC m=+2.837152440 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510134 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510140 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510132095 +0000 UTC m=+2.837194611 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510158 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510150636 +0000 UTC m=+2.837213152 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510185 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510233 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510290 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510280639 +0000 UTC m=+2.837343145 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510290 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510325 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510343 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510364 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510344611 +0000 UTC m=+2.837407157 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510404 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510422 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510447 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510434234 +0000 UTC m=+2.837496930 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510368 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510481 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510498 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510487375 +0000 UTC m=+2.837550071 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.510535 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510575 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510596 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510585698 +0000 UTC m=+2.837648224 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510624 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510640 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510620679 +0000 UTC m=+2.837683345 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510665 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.51065376 +0000 UTC m=+2.837716266 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510701 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: E1203 14:26:09.510733 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510723942 +0000 UTC m=+2.837786448 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Dec 03 14:26:09.532364 master-0 kubenswrapper[4409]: I1203 14:26:09.530781 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpt5\" (UniqueName: \"kubernetes.io/projected/e97e1725-cb55-4ce3-952d-a4fd0731577d-kube-api-access-9hpt5\") pod \"network-operator-6cbf58c977-8lh6n\" (UID: \"e97e1725-cb55-4ce3-952d-a4fd0731577d\") " pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n"
Dec 03 14:26:09.613740 master-0 kubenswrapper[4409]: I1203 14:26:09.613603 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhw8\" (UniqueName: \"kubernetes.io/projected/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-kube-api-access-xhhw8\") pod
\"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:09.613825 master-0 kubenswrapper[4409]: E1203 14:26:09.613765 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.613876 master-0 kubenswrapper[4409]: E1203 14:26:09.613839 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.613876 master-0 kubenswrapper[4409]: E1203 14:26:09.613855 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.613949 master-0 kubenswrapper[4409]: E1203 14:26:09.613910 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.113892708 +0000 UTC m=+2.440955264 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.614123 master-0 kubenswrapper[4409]: E1203 14:26:09.614079 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:09.614123 master-0 kubenswrapper[4409]: E1203 14:26:09.614121 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.614191 master-0 kubenswrapper[4409]: E1203 14:26:09.614135 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.614191 master-0 kubenswrapper[4409]: E1203 14:26:09.614189 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.114171166 +0000 UTC m=+2.441233752 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.614807 master-0 kubenswrapper[4409]: E1203 14:26:09.614775 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.614807 master-0 kubenswrapper[4409]: E1203 14:26:09.614797 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.614807 master-0 kubenswrapper[4409]: E1203 14:26:09.614808 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.615029 master-0 kubenswrapper[4409]: E1203 14:26:09.614837 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.114829294 +0000 UTC m=+2.441891890 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.615029 master-0 kubenswrapper[4409]: I1203 14:26:09.614873 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dv7j\" (UniqueName: \"kubernetes.io/projected/04e9e2a5-cdc2-42af-ab2c-49525390be6d-kube-api-access-2dv7j\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:09.615696 master-0 kubenswrapper[4409]: I1203 14:26:09.615659 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96q6\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-kube-api-access-z96q6\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:09.616304 master-0 kubenswrapper[4409]: I1203 14:26:09.615832 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955zg\" (UniqueName: \"kubernetes.io/projected/8c6fa89f-268c-477b-9f04-238d2305cc89-kube-api-access-955zg\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:09.616304 master-0 kubenswrapper[4409]: E1203 14:26:09.615968 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 
14:26:09.616304 master-0 kubenswrapper[4409]: E1203 14:26:09.615979 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.616304 master-0 kubenswrapper[4409]: E1203 14:26:09.615987 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.616304 master-0 kubenswrapper[4409]: E1203 14:26:09.616033 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.116022598 +0000 UTC m=+2.443085184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617065 master-0 kubenswrapper[4409]: I1203 14:26:09.617049 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6tjl\" (UniqueName: \"kubernetes.io/projected/42c95e54-b4ba-4b19-a97c-abcec840ac5d-kube-api-access-b6tjl\") pod \"node-resolver-4xlhs\" (UID: \"42c95e54-b4ba-4b19-a97c-abcec840ac5d\") " pod="openshift-dns/node-resolver-4xlhs" Dec 03 14:26:09.617208 master-0 kubenswrapper[4409]: I1203 14:26:09.617156 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ec89938d-35a5-46ba-8c63-12489db18cbd-kube-api-access\") pod \"cluster-version-operator-7c49fbfc6f-7krqx\" (UID: \"ec89938d-35a5-46ba-8c63-12489db18cbd\") " pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" Dec 03 14:26:09.617278 master-0 kubenswrapper[4409]: E1203 14:26:09.617221 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.617278 master-0 kubenswrapper[4409]: E1203 14:26:09.617243 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.617278 master-0 kubenswrapper[4409]: E1203 14:26:09.617254 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617439 master-0 kubenswrapper[4409]: E1203 14:26:09.617326 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.117309965 +0000 UTC m=+2.444372541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617493 master-0 kubenswrapper[4409]: E1203 14:26:09.617472 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.617529 master-0 kubenswrapper[4409]: E1203 14:26:09.617494 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.617561 master-0 kubenswrapper[4409]: E1203 14:26:09.617529 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617600 master-0 kubenswrapper[4409]: E1203 14:26:09.617571 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.117560512 +0000 UTC m=+2.444623088 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617824 master-0 kubenswrapper[4409]: E1203 14:26:09.617796 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:09.617824 master-0 kubenswrapper[4409]: E1203 14:26:09.617819 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.617906 master-0 kubenswrapper[4409]: E1203 14:26:09.617829 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617906 master-0 kubenswrapper[4409]: E1203 14:26:09.617889 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.117878621 +0000 UTC m=+2.444941217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.617977 master-0 kubenswrapper[4409]: I1203 14:26:09.617953 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsjls\" (UniqueName: \"kubernetes.io/projected/a9b62b2f-1e7a-4f1b-a988-4355d93dda46-kube-api-access-gsjls\") pod \"machine-approver-cb84b9cdf-qn94w\" (UID: \"a9b62b2f-1e7a-4f1b-a988-4355d93dda46\") " pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" Dec 03 14:26:09.618754 master-0 kubenswrapper[4409]: I1203 14:26:09.618599 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm96f\" (UniqueName: \"kubernetes.io/projected/77430348-b53a-4898-8047-be8bb542a0a7-kube-api-access-wm96f\") pod \"ovnkube-node-txl6b\" (UID: \"77430348-b53a-4898-8047-be8bb542a0a7\") " pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:09.618754 master-0 kubenswrapper[4409]: E1203 14:26:09.618691 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.618754 master-0 kubenswrapper[4409]: I1203 14:26:09.618699 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5fn5\" (UniqueName: \"kubernetes.io/projected/c777c9de-1ace-46be-b5c2-c71d252f53f4-kube-api-access-k5fn5\") pod \"multus-kk4tm\" (UID: \"c777c9de-1ace-46be-b5c2-c71d252f53f4\") " pod="openshift-multus/multus-kk4tm" Dec 03 14:26:09.618754 master-0 kubenswrapper[4409]: E1203 14:26:09.618708 4409 projected.go:288] Couldn't get configMap 
openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.619984 master-0 kubenswrapper[4409]: E1203 14:26:09.618771 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.619984 master-0 kubenswrapper[4409]: E1203 14:26:09.618816 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.118806337 +0000 UTC m=+2.445868913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.677547 master-0 kubenswrapper[4409]: E1203 14:26:09.677515 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:09.677547 master-0 kubenswrapper[4409]: E1203 14:26:09.677551 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677566 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod 
openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677658 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.177610075 +0000 UTC m=+2.504672581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677719 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677732 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677741 4409 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.677817 master-0 kubenswrapper[4409]: E1203 14:26:09.677773 
4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.177765299 +0000 UTC m=+2.504827805 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.678664 master-0 kubenswrapper[4409]: I1203 14:26:09.678630 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4w9\" (UniqueName: \"kubernetes.io/projected/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-kube-api-access-mq4w9\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:09.678806 master-0 kubenswrapper[4409]: I1203 14:26:09.678727 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:09.718983 master-0 kubenswrapper[4409]: I1203 14:26:09.718919 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:09.719340 master-0 kubenswrapper[4409]: E1203 14:26:09.719115 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:09.719340 master-0 kubenswrapper[4409]: E1203 14:26:09.719138 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.719340 master-0 kubenswrapper[4409]: E1203 14:26:09.719154 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.719639 master-0 kubenswrapper[4409]: E1203 14:26:09.719342 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.719271586 +0000 UTC m=+3.046334092 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746188 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746227 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746245 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746270 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746307 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746315 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.246295442 +0000 UTC m=+2.573357948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746327 4409 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746421 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.246396565 +0000 UTC m=+2.573459121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746442 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746470 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746484 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.747162 master-0 kubenswrapper[4409]: E1203 14:26:09.746541 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.246524489 +0000 UTC m=+2.573586995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.758820 master-0 kubenswrapper[4409]: I1203 14:26:09.758771 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpnb\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-kube-api-access-cjpnb\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:09.790380 master-0 kubenswrapper[4409]: I1203 14:26:09.790341 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:09.798532 master-0 kubenswrapper[4409]: I1203 14:26:09.798493 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:09.823371 master-0 kubenswrapper[4409]: I1203 14:26:09.823315 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:09.823371 master-0 kubenswrapper[4409]: I1203 14:26:09.823375 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: 
\"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:09.823611 master-0 kubenswrapper[4409]: I1203 14:26:09.823427 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:09.823611 master-0 kubenswrapper[4409]: E1203 14:26:09.823530 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.823611 master-0 kubenswrapper[4409]: E1203 14:26:09.823570 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.823611 master-0 kubenswrapper[4409]: E1203 14:26:09.823587 4409 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.823611 master-0 kubenswrapper[4409]: I1203 14:26:09.823604 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823608 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823659 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.823635516 +0000 UTC m=+3.150698022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823673 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823691 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823670 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object 
"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823704 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823739 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.823727978 +0000 UTC m=+3.150790484 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.823774 master-0 kubenswrapper[4409]: E1203 14:26:09.823768 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.823746479 +0000 UTC m=+3.150809165 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823790 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823804 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823813 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: I1203 14:26:09.823835 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823840 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:10.823831781 +0000 UTC m=+3.150894287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: I1203 14:26:09.823912 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823982 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: E1203 14:26:09.823994 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824060 master-0 kubenswrapper[4409]: I1203 14:26:09.824041 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:09.824325 master-0 kubenswrapper[4409]: I1203 14:26:09.824094 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:09.824325 master-0 kubenswrapper[4409]: I1203 14:26:09.824123 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:09.824325 master-0 kubenswrapper[4409]: I1203 14:26:09.824244 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:09.824325 master-0 kubenswrapper[4409]: E1203 14:26:09.824262 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824253433 +0000 UTC m=+3.151315939 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: I1203 14:26:09.824365 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824389 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824409 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824419 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824391 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824445 4409 
projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824453 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824454 master-0 kubenswrapper[4409]: E1203 14:26:09.824463 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: I1203 14:26:09.824446 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824463 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824499 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824462 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824452379 +0000 UTC m=+3.151514885 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824519 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824531 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824542 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824551 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824563 4409 projected.go:288] 
Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824529 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824519881 +0000 UTC m=+3.151582387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824571 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824578 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824587 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 
14:26:09.824599 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824593453 +0000 UTC m=+3.151655959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: I1203 14:26:09.824596 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824619 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824608823 +0000 UTC m=+3.151671549 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824513 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824638 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824631314 +0000 UTC m=+3.151694040 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824641 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824652 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824654 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824672 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824681 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.824683 master-0 kubenswrapper[4409]: E1203 14:26:09.824653 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.824646724 +0000 UTC m=+3.151709460 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: E1203 14:26:09.824888 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.82483515 +0000 UTC m=+3.151897666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: I1203 14:26:09.825063 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: I1203 14:26:09.825105 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod 
\"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: I1203 14:26:09.825203 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: I1203 14:26:09.825290 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:09.825423 master-0 kubenswrapper[4409]: I1203 14:26:09.825334 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:09.825791 master-0 kubenswrapper[4409]: I1203 14:26:09.825422 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:09.825791 master-0 kubenswrapper[4409]: I1203 14:26:09.825458 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:09.825791 master-0 kubenswrapper[4409]: I1203 14:26:09.825585 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:09.826482 master-0 kubenswrapper[4409]: E1203 14:26:09.826424 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.826531 master-0 kubenswrapper[4409]: E1203 14:26:09.826508 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.826531 master-0 kubenswrapper[4409]: E1203 14:26:09.826524 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.826595 master-0 kubenswrapper[4409]: E1203 14:26:09.826563 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.826553838 +0000 UTC m=+3.153616344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.826677 master-0 kubenswrapper[4409]: E1203 14:26:09.826657 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:09.826722 master-0 kubenswrapper[4409]: E1203 14:26:09.826683 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.826722 master-0 kubenswrapper[4409]: E1203 14:26:09.826694 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.826776 master-0 kubenswrapper[4409]: E1203 14:26:09.826719 4409 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:09.826776 master-0 kubenswrapper[4409]: E1203 14:26:09.826735 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.826776 master-0 kubenswrapper[4409]: I1203 14:26:09.826661 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: 
\"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:09.826776 master-0 kubenswrapper[4409]: E1203 14:26:09.826751 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.826776 master-0 kubenswrapper[4409]: E1203 14:26:09.826739 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.826729983 +0000 UTC m=+3.153792489 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.826915 master-0 kubenswrapper[4409]: E1203 14:26:09.826794 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:09.826915 master-0 kubenswrapper[4409]: I1203 14:26:09.826807 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:09.826915 master-0 kubenswrapper[4409]: I1203 14:26:09.826857 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:09.826915 master-0 kubenswrapper[4409]: I1203 14:26:09.826905 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 
14:26:09.827099 master-0 kubenswrapper[4409]: E1203 14:26:09.826813 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827099 master-0 kubenswrapper[4409]: E1203 14:26:09.827094 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.826850 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.827127 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.827135 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.827141 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.827146 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827174 
master-0 kubenswrapper[4409]: E1203 14:26:09.827151 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.826891 4409 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827174 master-0 kubenswrapper[4409]: E1203 14:26:09.827178 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.826932 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.827188 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.827197 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.827206 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod 
openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.826968 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.82696036 +0000 UTC m=+3.154022866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.827024 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: I1203 14:26:09.827255 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: I1203 14:26:09.827324 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod 
\"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:09.827397 master-0 kubenswrapper[4409]: E1203 14:26:09.827264 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827419 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827032 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827441 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827456 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827462 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827465 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod 
openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827475 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827073 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827514 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827076 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827525 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827534 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: 
E1203 14:26:09.827543 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827362 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827571 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827399 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827385512 +0000 UTC m=+3.154448018 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827580 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827617 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827596678 +0000 UTC m=+3.154659184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827636 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827629069 +0000 UTC m=+3.154691575 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.827638 master-0 kubenswrapper[4409]: E1203 14:26:09.827653 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827646329 +0000 UTC m=+3.154708835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827668 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.82766171 +0000 UTC m=+3.154724216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827684 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.82767699 +0000 UTC m=+3.154739496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: I1203 14:26:09.827795 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827815 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:10.827796544 +0000 UTC m=+3.154859050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827837 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827827245 +0000 UTC m=+3.154889751 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827854 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827846875 +0000 UTC m=+3.154909381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827869 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827861336 +0000 UTC m=+3.154923832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827882 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827875916 +0000 UTC m=+3.154938422 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827885 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827897 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827899 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827894526 +0000 UTC m=+3.154957032 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827906 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.827938 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.827931148 +0000 UTC m=+3.154993654 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: I1203 14:26:09.828018 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: I1203 14:26:09.828043 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828083 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828102 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: I1203 14:26:09.828102 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828110 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828180 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828199 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828208 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828213 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.828205615 +0000 UTC m=+3.155268121 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828238 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.828230126 +0000 UTC m=+3.155292632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828224 master-0 kubenswrapper[4409]: E1203 14:26:09.828182 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:09.828933 master-0 kubenswrapper[4409]: E1203 14:26:09.828265 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:09.828933 master-0 kubenswrapper[4409]: E1203 14:26:09.828275 4409 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod 
openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.828933 master-0 kubenswrapper[4409]: E1203 14:26:09.828308 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.828302138 +0000 UTC m=+3.155364644 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:09.882165 master-0 kubenswrapper[4409]: I1203 14:26:09.881855 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"d724802f2b9fda1d6f83046fb2d991d1e2565807167900e0a9654642e6c1ff2c"} Dec 03 14:26:09.886534 master-0 kubenswrapper[4409]: I1203 14:26:09.886448 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"026b31c39360f96fe28d20d8e43c055099ab4803be9fb78e3ebfb58db22a48ca"} Dec 03 14:26:09.889745 master-0 kubenswrapper[4409]: I1203 14:26:09.889701 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4p4zh" 
event={"ID":"d7d6a05e-beee-40e9-b376-5c22e285b27a","Type":"ContainerStarted","Data":"bbe587b51102a3f1d389a061187cb287226a56069181ea19c5f548eb14b132ad"} Dec 03 14:26:09.891413 master-0 kubenswrapper[4409]: I1203 14:26:09.891351 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"89750da41eecfb45e5b3bf388dec53de1d825fdd90f985ad3bf6cef27e55d427"} Dec 03 14:26:09.894664 master-0 kubenswrapper[4409]: I1203 14:26:09.894376 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"0e6e398c6d9f86dbc6315ea7ffcaf85013ca4f8b4b0bebde5e8abceda9b06f83"} Dec 03 14:26:09.899190 master-0 kubenswrapper[4409]: I1203 14:26:09.899143 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:09.899190 master-0 kubenswrapper[4409]: I1203 14:26:09.899191 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:09.899588 master-0 kubenswrapper[4409]: I1203 14:26:09.899214 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-pvrfs" event={"ID":"eecc43f5-708f-4395-98cc-696b243d6321","Type":"ContainerStarted","Data":"81df026c9d8442cf0f608fcaa550f09b444748470a2328bc05d1dae32ccc94cc"} Dec 03 14:26:09.993033 master-0 kubenswrapper[4409]: E1203 14:26:09.992867 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:09.993033 master-0 kubenswrapper[4409]: E1203 14:26:09.992974 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object 
"openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:09.993195 master-0 kubenswrapper[4409]: E1203 14:26:09.993056 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.49303325 +0000 UTC m=+2.820095756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.007275 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: I1203 14:26:10.007584 4409 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c180b512-bf0c-4ddc-a5cf-f04acc830a61\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-snapshot-controller-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-operator-7b795784b8-44frm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: I1203 14:26:10.008180 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnrdd\" (UniqueName: \"kubernetes.io/projected/6b681889-eb2c-41fb-a1dc-69b99227b45b-kube-api-access-hnrdd\") pod \"cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq\" (UID: 
\"6b681889-eb2c-41fb-a1dc-69b99227b45b\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.008366 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.008395 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.008466 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.508439776 +0000 UTC m=+2.835502282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.009958 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.009979 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.009992 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.010083 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.510057422 +0000 UTC m=+2.837119928 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.011935 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.011954 4409 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.015743 master-0 kubenswrapper[4409]: E1203 14:26:10.011996 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:10.511986937 +0000 UTC m=+2.839049443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.084226 master-0 kubenswrapper[4409]: I1203 14:26:10.084148 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:26:10.125598 master-0 kubenswrapper[4409]: I1203 14:26:10.125522 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Dec 03 14:26:10.137658 master-0 kubenswrapper[4409]: I1203 14:26:10.137591 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:10.137658 master-0 kubenswrapper[4409]: I1203 14:26:10.137649 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:10.137921 master-0 kubenswrapper[4409]: I1203 14:26:10.137814 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: 
\"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:10.137921 master-0 kubenswrapper[4409]: I1203 14:26:10.137880 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:10.137992 master-0 kubenswrapper[4409]: E1203 14:26:10.137886 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:10.137992 master-0 kubenswrapper[4409]: E1203 14:26:10.137946 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.137992 master-0 kubenswrapper[4409]: E1203 14:26:10.137960 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138110 master-0 kubenswrapper[4409]: I1203 14:26:10.137920 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:10.138142 
master-0 kubenswrapper[4409]: E1203 14:26:10.138047 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:10.138179 master-0 kubenswrapper[4409]: E1203 14:26:10.138148 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:10.138212 master-0 kubenswrapper[4409]: E1203 14:26:10.138182 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.138212 master-0 kubenswrapper[4409]: E1203 14:26:10.138209 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138282 master-0 kubenswrapper[4409]: E1203 14:26:10.138236 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.138318 master-0 kubenswrapper[4409]: E1203 14:26:10.138298 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.138354 master-0 kubenswrapper[4409]: E1203 14:26:10.138323 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138389 master-0 
kubenswrapper[4409]: E1203 14:26:10.138146 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.138426 master-0 kubenswrapper[4409]: E1203 14:26:10.138393 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.138426 master-0 kubenswrapper[4409]: E1203 14:26:10.138412 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138492 master-0 kubenswrapper[4409]: E1203 14:26:10.138190 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.138492 master-0 kubenswrapper[4409]: E1203 14:26:10.138475 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138553 master-0 kubenswrapper[4409]: E1203 14:26:10.138056 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.138035452 +0000 UTC m=+3.465097958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138726 master-0 kubenswrapper[4409]: I1203 14:26:10.138689 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:10.138767 master-0 kubenswrapper[4409]: E1203 14:26:10.138723 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.138701 +0000 UTC m=+3.465763506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138807 master-0 kubenswrapper[4409]: E1203 14:26:10.138766 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:11.138758142 +0000 UTC m=+3.465820648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138807 master-0 kubenswrapper[4409]: E1203 14:26:10.138783 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:10.138807 master-0 kubenswrapper[4409]: E1203 14:26:10.138786 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.138779133 +0000 UTC m=+3.465841639 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138807 master-0 kubenswrapper[4409]: E1203 14:26:10.138800 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.138807 master-0 kubenswrapper[4409]: E1203 14:26:10.138809 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138964 master-0 kubenswrapper[4409]: E1203 14:26:10.138814 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.138807683 +0000 UTC m=+3.465870189 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.138964 master-0 kubenswrapper[4409]: I1203 14:26:10.138867 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:10.138964 master-0 kubenswrapper[4409]: E1203 14:26:10.138903 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.138890696 +0000 UTC m=+3.465953222 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.139088 master-0 kubenswrapper[4409]: E1203 14:26:10.138974 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:10.139088 master-0 kubenswrapper[4409]: E1203 14:26:10.139000 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.139088 master-0 kubenswrapper[4409]: E1203 14:26:10.139039 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.139088 master-0 kubenswrapper[4409]: E1203 14:26:10.139078 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.139220 master-0 kubenswrapper[4409]: I1203 14:26:10.139021 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:10.139220 master-0 kubenswrapper[4409]: E1203 
14:26:10.139095 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.139220 master-0 kubenswrapper[4409]: E1203 14:26:10.139167 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.139220 master-0 kubenswrapper[4409]: E1203 14:26:10.139137 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.139094992 +0000 UTC m=+3.466157638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.139522 master-0 kubenswrapper[4409]: E1203 14:26:10.139492 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.139473242 +0000 UTC m=+3.466535868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.229723 master-0 kubenswrapper[4409]: I1203 14:26:10.229561 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:10.245473 master-0 kubenswrapper[4409]: I1203 14:26:10.245405 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:10.245473 master-0 kubenswrapper[4409]: I1203 14:26:10.245472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:10.245749 master-0 kubenswrapper[4409]: E1203 14:26:10.245717 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:10.245787 master-0 kubenswrapper[4409]: E1203 14:26:10.245760 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.245787 master-0 kubenswrapper[4409]: E1203 14:26:10.245780 4409 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.245983 master-0 kubenswrapper[4409]: E1203 14:26:10.245936 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:10.245983 master-0 kubenswrapper[4409]: E1203 14:26:10.245978 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.245936561 +0000 UTC m=+3.572999227 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.246343 master-0 kubenswrapper[4409]: E1203 14:26:10.245983 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.246343 master-0 kubenswrapper[4409]: E1203 14:26:10.246050 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.246343 master-0 kubenswrapper[4409]: E1203 14:26:10.246135 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.246111506 +0000 UTC m=+3.573174172 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.351998 master-0 kubenswrapper[4409]: I1203 14:26:10.351671 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:10.351998 master-0 kubenswrapper[4409]: I1203 14:26:10.351807 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:10.351998 master-0 kubenswrapper[4409]: I1203 14:26:10.351872 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352073 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object 
"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352134 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352155 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352165 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352188 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352201 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352274 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:11.352241646 +0000 UTC m=+3.679304332 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352299 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.352291697 +0000 UTC m=+3.679354393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352294 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352359 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352382 4409 projected.go:194] Error preparing data for projected volume 
kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.352663 master-0 kubenswrapper[4409]: E1203 14:26:10.352510 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.352470972 +0000 UTC m=+3.679533638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.418534 master-0 kubenswrapper[4409]: I1203 14:26:10.418456 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:10.418534 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:10.418534 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:10.418534 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:10.418876 master-0 kubenswrapper[4409]: I1203 14:26:10.418546 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 
14:26:10.557745 master-0 kubenswrapper[4409]: I1203 14:26:10.557428 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: I1203 14:26:10.557502 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: I1203 14:26:10.557544 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: I1203 14:26:10.557568 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: E1203 14:26:10.557580 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: I1203 14:26:10.557591 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:10.557745 master-0 kubenswrapper[4409]: E1203 14:26:10.557686 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.557768 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.557770 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.557725 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.557699 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.557673011 +0000 UTC m=+4.884735537 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558022 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558040 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558021701 +0000 UTC m=+4.885084207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558062 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558055612 +0000 UTC m=+4.885118118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558076 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558068662 +0000 UTC m=+4.885131168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558092 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558084822 +0000 UTC m=+4.885147328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: E1203 14:26:10.558106 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558098163 +0000 UTC m=+4.885160659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: I1203 14:26:10.557999 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: I1203 14:26:10.558139 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:10.558173 master-0 kubenswrapper[4409]: I1203 
14:26:10.558164 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558196 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558228 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558240 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558278 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558259127 +0000 UTC m=+4.885321633 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558300 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558311 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558317 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558344 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558333659 +0000 UTC m=+4.885396185 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558308 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558398 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558399 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558421 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558445 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558438362 +0000 UTC m=+4.885500868 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558477 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558454143 +0000 UTC m=+4.885516749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558516 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558585 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: 
\"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558612 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: I1203 14:26:10.558619 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558651 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558640468 +0000 UTC m=+4.885702984 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:10.558645 master-0 kubenswrapper[4409]: E1203 14:26:10.558671 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558662779 +0000 UTC m=+4.885725295 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.558698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558709 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558712 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.558735 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558740 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.558730861 +0000 UTC m=+4.885793367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558774 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558766052 +0000 UTC m=+4.885828578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558784 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558792 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558783482 +0000 UTC m=+4.885845998 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558792 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558835 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558824903 +0000 UTC m=+4.885887529 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.558860 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.558901 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558918 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.558928 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558941 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558935746 +0000 UTC m=+4.885998252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558972 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558976 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558962417 +0000 UTC m=+4.886025083 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.558984 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559020 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559026 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.558999208 +0000 UTC m=+4.886061714 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559069 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.55905977 +0000 UTC m=+4.886122386 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559089 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559121 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559142 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:12.559136262 +0000 UTC m=+4.886198768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559121 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559153 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559181 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559214 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559220 4409 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559220 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559235 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559245 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559238685 +0000 UTC m=+4.886301291 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: I1203 14:26:10.559258 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559266 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559257036 +0000 UTC m=+4.886319552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559283 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559285 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.559276806 +0000 UTC m=+4.886339322 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559291 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:10.559259 master-0 kubenswrapper[4409]: E1203 14:26:10.559303 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559298307 +0000 UTC m=+4.886360813 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559303 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559323 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559344 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559332518 +0000 UTC m=+4.886395144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559347 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559374 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559380 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559373649 +0000 UTC m=+4.886436165 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559421 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559439 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559432181 +0000 UTC m=+4.886494687 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559442 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559453 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559466 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559480 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559472402 +0000 UTC m=+4.886534908 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559496 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559489472 +0000 UTC m=+4.886551978 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559513 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559516 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559533 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls 
podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559527723 +0000 UTC m=+4.886590339 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559559 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559584 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559578495 +0000 UTC m=+4.886640991 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559587 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559609 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.559604525 +0000 UTC m=+4.886667031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559558 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559648 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559704 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559733 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559757 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559759 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559804 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559820 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559831 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559819362 +0000 UTC m=+4.886881888 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559762 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559848 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559849 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559842682 +0000 UTC m=+4.886905188 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559858 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.559812 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559867 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559861423 +0000 UTC m=+4.886923929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559917 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559902984 +0000 UTC m=+4.886965590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559957 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.559925425 +0000 UTC m=+4.886988061 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.559986 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.559977006 +0000 UTC m=+4.887039622 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560035 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560076 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560105 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560169 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" 
(UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560193 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560205 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560236 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560260 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560271 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560273 4409 projected.go:288] Couldn't get configMap 
openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560293 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560302 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560323 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560336 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560345 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560323 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560304 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.560290065 +0000 UTC m=+4.887352651 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560284 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560404 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560424 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560416438 +0000 UTC m=+4.887478944 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560448 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560464 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560472 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560464 +0000 UTC m=+4.887526506 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560491 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560514 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560522 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560512241 +0000 UTC m=+4.887574747 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560542 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560532462 +0000 UTC m=+4.887594968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560553 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560547892 +0000 UTC m=+4.887610398 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560565 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560572 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560588 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560581613 +0000 UTC m=+4.887644109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560603 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560596014 +0000 UTC m=+4.887658510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560610 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560619 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560612914 +0000 UTC m=+4.887675420 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560723 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560754 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f 
nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560746378 +0000 UTC m=+4.887808884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560768 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560762578 +0000 UTC m=+4.887825074 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560793 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560796 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: I1203 14:26:10.560849 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560875 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560855461 +0000 UTC m=+4.887918007 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:10.560816 master-0 kubenswrapper[4409]: E1203 14:26:10.560880 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.560917 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.560948 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560939963 +0000 UTC m=+4.888002549 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.560915 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.560964 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560955854 +0000 UTC m=+4.888018360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.560966 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.560983 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.560994 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.560987875 +0000 UTC m=+4.888050381 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561066 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561088 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561107 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561136 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object 
"openshift-route-controller-manager"/"config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561152 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561115 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561167 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.56115729 +0000 UTC m=+4.888219796 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561169 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561281 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.561273083 +0000 UTC m=+4.888335589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561225 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561294 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561311 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561305474 +0000 UTC m=+4.888368100 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561342 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561368 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561349745 +0000 UTC m=+4.888412371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561395 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561382546 +0000 UTC m=+4.888445092 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561448 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561482 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561495 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561510 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561503739 +0000 UTC m=+4.888566245 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561538 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561585 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561604 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561608 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561600112 +0000 UTC m=+4.888662618 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561583 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561624 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561618963 +0000 UTC m=+4.888681469 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561633 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561636 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561631713 +0000 UTC m=+4.888694219 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561701 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561738 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561718975 +0000 UTC m=+4.888781481 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561750 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561792 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.561780817 +0000 UTC m=+4.888843323 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561820 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561878 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561894 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561935 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.561900 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561935 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:12.561925601 +0000 UTC m=+4.888988107 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561972 4409 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.561983 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562035 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562062 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562051755 +0000 UTC m=+4.889114261 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562089 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562071175 +0000 UTC m=+4.889133711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562115 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562129 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562152 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:12.562142127 +0000 UTC m=+4.889204713 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562167 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562198 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562207 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562197969 +0000 UTC m=+4.889260475 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562248 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.56223605 +0000 UTC m=+4.889298586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562266 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562278 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562301 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config 
podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562291712 +0000 UTC m=+4.889354378 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562430 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562471 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562461737 +0000 UTC m=+4.889524243 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562470 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562509 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562520 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562541 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562528048 +0000 UTC m=+4.889590554 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562566 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562595 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562605 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562642 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562629051 +0000 UTC m=+4.889691587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562653 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562667 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562675 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562681 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562695 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.562687423 +0000 UTC m=+4.889749929 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562736 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562767 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562760745 +0000 UTC m=+4.889823251 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562810 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.562854 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562913 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562941 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.562979 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web 
podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.562883158 +0000 UTC m=+4.889945674 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563246 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563286 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563317 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563310661 +0000 UTC m=+4.890373167 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563363 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563337661 +0000 UTC m=+4.890400177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563403 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563388913 +0000 UTC m=+4.890451699 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563480 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563504 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563527 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563569 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod 
\"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563580 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563592 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563604 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563598249 +0000 UTC m=+4.890660755 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563618 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563642 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563644 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563668 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563688 4409 secret.go:189] 
Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563690 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563706 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563722 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563710712 +0000 UTC m=+4.890773298 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563695 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563727 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563771 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563718 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563741 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563734523 +0000 UTC m=+4.890797029 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563863 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563848316 +0000 UTC m=+4.890910862 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563753 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563892 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563876967 +0000 UTC m=+4.890939513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.563926 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.563914068 +0000 UTC m=+4.890976574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563949 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563977 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.563999 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: I1203 14:26:10.564043 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.564066 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564051572 +0000 UTC m=+4.891114108 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:10.565259 master-0 kubenswrapper[4409]: E1203 14:26:10.564092 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564095 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564083873 +0000 UTC m=+4.891146409 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564122 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564111493 +0000 UTC m=+4.891174029 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564126 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564138 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564166 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564156785 +0000 UTC m=+4.891219281 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564193 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564223 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564210786 +0000 UTC m=+4.891273302 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564223 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564254 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564290 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564280148 +0000 UTC m=+4.891342654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564314 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564342 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564315069 +0000 UTC m=+4.891377575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564376 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564396 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564391061 +0000 UTC m=+4.891453567 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564405 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564465 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564470 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564493 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564478974 +0000 UTC m=+4.891541490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564545 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564562 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564573 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564576 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564568476 +0000 UTC m=+4.891630982 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564723 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564774 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564784 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564802 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564791463 +0000 UTC m=+4.891853969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564825 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564817203 +0000 UTC m=+4.891879709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564856 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564885 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564898 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564906 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564910 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564948 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.564936317 +0000 UTC m=+4.891998833 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564950 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.564973 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.564981 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565022 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565016079 +0000 UTC m=+4.892078585 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565019 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565042 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565034099 +0000 UTC m=+4.892096726 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565060 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565068 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565085 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565078821 +0000 UTC m=+4.892141327 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565101 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565094461 +0000 UTC m=+4.892156967 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565104 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565118 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565127 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565130 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565147 4409 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565163 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565150513 +0000 UTC m=+4.892213019 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565176 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565180 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565173293 +0000 UTC m=+4.892235799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565184 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565146 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565225 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.565218035 +0000 UTC m=+3.892280661 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565287 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565273746 +0000 UTC m=+4.892336422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565309 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565302297 +0000 UTC m=+4.892365063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565338 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565378 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565403 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565435 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565448 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565456 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565469 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565478 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565471462 +0000 UTC m=+4.892533968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565504 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565508 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565518 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565506163 +0000 UTC m=+4.892568859 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565531 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565544 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565539154 +0000 UTC m=+4.892601660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565554 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565569 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565571 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565615 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565603896 +0000 UTC m=+4.892666632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565633 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.565625376 +0000 UTC m=+3.892687882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565647 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565694 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565723 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName:
\"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565747 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565757 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565774 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565790 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565799 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565783021 +0000 UTC m=+4.892845557 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565827 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565811612 +0000 UTC m=+4.892874158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565829 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565830 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565870 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.565861003 +0000 UTC m=+4.892923599 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.565940 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.565994 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566035 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566028988 +0000 UTC m=+4.893091494 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566077 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566082 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566062399 +0000 UTC m=+4.893124915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566133 4409 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566148 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566179 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566167062 +0000 UTC m=+4.893229838 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566188 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566230 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566220753 +0000 UTC m=+4.893283509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566223 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566253 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.566239624 +0000 UTC m=+4.893302380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566287 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566324 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566316766 +0000 UTC m=+4.893379272 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566286 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566360 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566381 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566419 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566431 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.566421839 +0000 UTC m=+4.893484355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566452 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566442969 +0000 UTC m=+4.893505485 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566466 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566383 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:10.571164 master-0 
kubenswrapper[4409]: E1203 14:26:10.566494 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566485661 +0000 UTC m=+4.893548427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566537 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566591 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566638 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566663 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566682 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566702 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566691926 +0000 UTC m=+4.893754652 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566749 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: I1203 14:26:10.566757 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566764 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.571164 master-0 kubenswrapper[4409]: E1203 14:26:10.566820 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566823 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object 
"openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566829 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566847 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566782 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.566793 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566880 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566870041 +0000 UTC m=+4.893932777 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566921 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566913493 +0000 UTC m=+4.893975999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566938 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566929703 +0000 UTC m=+4.893992209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566952 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566944824 +0000 UTC m=+4.894007330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.566964 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.566958864 +0000 UTC m=+4.894021370 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567014 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567054 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567064 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567076 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567095 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567087688 +0000 UTC m=+4.894150194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567119 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567132 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567140 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567135149 +0000 UTC m=+4.894197655 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567203 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567222 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567235 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567260 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567292 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.567270093 +0000 UTC m=+4.894332639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567315 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567323 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567309494 +0000 UTC m=+4.894372030 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567261 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567331 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567362 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567343 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567333775 +0000 UTC m=+4.894396281 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567450 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567481 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567463058 +0000 UTC m=+4.894525744 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567497 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567519 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.56750824 +0000 UTC m=+4.894570996 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567575 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567615 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:10.575297 master-0 
kubenswrapper[4409]: I1203 14:26:10.567629 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567661 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567649854 +0000 UTC m=+4.894712570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567684 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567672914 +0000 UTC m=+4.894735670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567714 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567720 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567749 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567785 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567753407 +0000 UTC m=+4.894816153 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567791 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567815 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567824 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567845 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567835099 +0000 UTC m=+4.894897815 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567863 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567864 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567856289 +0000 UTC m=+4.894918795 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567890 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.56788329 +0000 UTC m=+4.894945796 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567914 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567941 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567957 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.567985 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.567963 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: 
\"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568041 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568076 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.567997433 +0000 UTC m=+4.895059949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568105 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568093226 +0000 UTC m=+4.895155972 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568142 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568171 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568163018 +0000 UTC m=+4.895225514 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568220 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568246 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568273 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568319 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.568304502 +0000 UTC m=+3.895367048 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568326 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568271 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568343 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568354 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568347443 +0000 UTC m=+4.895409949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568448 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568490 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568499 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568490207 +0000 UTC m=+4.895552713 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568568 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568526 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568658 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568686 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568670193 +0000 UTC m=+4.895732729 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568525 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568731 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568628 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568751 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568740955 +0000 UTC m=+4.895803461 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568794 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568828 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568833 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568824107 +0000 UTC m=+4.895886613 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568741 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568875 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568899 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568877298 +0000 UTC m=+4.895939844 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.568922 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568912489 +0000 UTC m=+4.895975225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.568952 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569020 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569030 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569039 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.568997312 +0000 UTC m=+4.896060028 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569104 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569104 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569119 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569109835 +0000 UTC m=+4.896172331 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569154 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569181 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569214 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569179357 +0000 UTC m=+4.896241863 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569238 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569248 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569273 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569254219 +0000 UTC m=+4.896316915 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569308 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:12.56929781 +0000 UTC m=+4.896360426 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569314 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569340 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569349 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569339072 +0000 UTC m=+4.896401808 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569391 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569443 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569395 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569477 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569483 4409 secret.go:189] Couldn't get secret
openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569501 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569491246 +0000 UTC m=+4.896553982 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569527 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569532 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569539 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569550 4409 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569535747 +0000 UTC m=+4.896598453 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569610 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569600709 +0000 UTC m=+4.896663435 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569630 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569653 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:12.56964055 +0000 UTC m=+4.896703056 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569670 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569718 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: I1203 14:26:10.569682 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569679 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.575297 master-0 kubenswrapper[4409]: E1203 14:26:10.569858 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.585137 master-0 kubenswrapper[4409]: E1203 14:26:10.569745 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569728273 +0000 UTC m=+4.896790809 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:10.585137 master-0 kubenswrapper[4409]: E1203 14:26:10.569901 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.569891147 +0000 UTC m=+4.896953653 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Dec 03 14:26:10.585137 master-0 kubenswrapper[4409]: E1203 14:26:10.569912 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:11.569907358 +0000 UTC m=+3.896969864 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.591438 master-0 kubenswrapper[4409]: I1203 14:26:10.590647 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Dec 03 14:26:10.776051 master-0 kubenswrapper[4409]: I1203 14:26:10.775945 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:10.777973 master-0 kubenswrapper[4409]: E1203 14:26:10.776269 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:26:10.777973 master-0 kubenswrapper[4409]: E1203 14:26:10.776328 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:26:10.777973 master-0 kubenswrapper[4409]: E1203 14:26:10.776352 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.777973 master-0 kubenswrapper[4409]: E1203 14:26:10.776459 4409 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.776431684 +0000 UTC m=+5.103494230 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:10.803832 master-0 kubenswrapper[4409]: I1203 14:26:10.803477 4409 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04e9e2a5-cdc2-42af-ab2c-49525390be6d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:26:08Z\\\",\\\"message\\\":\\\"containers with unready status:
[cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dv7j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-bbd9b9dff-rrfsm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 03 14:26:10.814840 master-0 kubenswrapper[4409]: I1203 14:26:10.814666 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:10.814840 master-0 kubenswrapper[4409]: I1203 14:26:10.814684 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:10.814840 master-0 kubenswrapper[4409]: I1203 14:26:10.814810 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:10.814840 master-0 kubenswrapper[4409]: E1203 14:26:10.814814 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.814869 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.814873 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.814909 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.814895 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.814867 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815072 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815077 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815113 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815119 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815151 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815193 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815215 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815216 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815235 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: E1203 14:26:10.815278 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: E1203 14:26:10.815316 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815327 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815369 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815379 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815391 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815406 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815474 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:10.815540 master-0 kubenswrapper[4409]: I1203 14:26:10.815557 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815596 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815607 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815617 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815622 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815624 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815713 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815729 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815739 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815756 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815782 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815788 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815810 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815820 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815685 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815833 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815675 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815693 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815703 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815880 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815720 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: E1203 14:26:10.815612 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815741 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815689 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815948 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815759 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815759 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815669 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815765 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815673 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815636 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815788 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815684 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815683 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815688 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815705 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: E1203 14:26:10.815872 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815719 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.816151 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815666 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815932 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815746 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.815917 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.816253 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.816275 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.816302 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: I1203 14:26:10.816283 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: E1203 14:26:10.816313 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: E1203 14:26:10.816384 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:10.816448 master-0 kubenswrapper[4409]: E1203 14:26:10.816465 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.816573 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.816634 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.816727 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.816869 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817032 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817149 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817204 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817257 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817313 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817374 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817437 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817521 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817588 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817637 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817695 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817753 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817794 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817857 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.817962 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.818082 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.818166 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:10.818321 master-0 kubenswrapper[4409]: E1203 14:26:10.818235 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:10.818991 master-0 kubenswrapper[4409]: E1203 14:26:10.818593 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:10.818991 master-0 kubenswrapper[4409]: E1203 14:26:10.818746 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:10.818991 master-0 kubenswrapper[4409]: E1203 14:26:10.818816 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:10.818991 master-0 kubenswrapper[4409]: E1203 14:26:10.818879 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:10.819140 master-0 kubenswrapper[4409]: E1203 14:26:10.818996 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:10.819140 master-0 kubenswrapper[4409]: E1203 14:26:10.819116 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:10.819237 master-0 kubenswrapper[4409]: E1203 14:26:10.819209 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:10.819373 master-0 kubenswrapper[4409]: E1203 14:26:10.819336 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:10.819629 master-0 kubenswrapper[4409]: E1203 14:26:10.819562 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:10.819706 master-0 kubenswrapper[4409]: E1203 14:26:10.819677 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:10.819806 master-0 kubenswrapper[4409]: E1203 14:26:10.819777 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:10.819894 master-0 kubenswrapper[4409]: E1203 14:26:10.819865 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:10.819964 master-0 kubenswrapper[4409]: E1203 14:26:10.819939 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:10.820057 master-0 kubenswrapper[4409]: E1203 14:26:10.820033 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:10.820152 master-0 kubenswrapper[4409]: E1203 14:26:10.820131 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:10.820210 master-0 kubenswrapper[4409]: E1203 14:26:10.820183 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:10.820419 master-0 kubenswrapper[4409]: E1203 14:26:10.820266 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:10.820419 master-0 kubenswrapper[4409]: E1203 14:26:10.820334 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:10.820517 master-0 kubenswrapper[4409]: E1203 14:26:10.820491 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:10.820641 master-0 kubenswrapper[4409]: E1203 14:26:10.820587 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:10.820751 master-0 kubenswrapper[4409]: E1203 14:26:10.820704 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:10.820905 master-0 kubenswrapper[4409]: E1203 14:26:10.820867 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:10.821033 master-0 kubenswrapper[4409]: E1203 14:26:10.820976 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:10.821190 master-0 kubenswrapper[4409]: E1203 14:26:10.821159 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:10.821274 master-0 kubenswrapper[4409]: E1203 14:26:10.821247 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:10.821389 master-0 kubenswrapper[4409]: E1203 14:26:10.821360 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:10.821444 master-0 kubenswrapper[4409]: E1203 14:26:10.821420 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:10.821569 master-0 kubenswrapper[4409]: E1203 14:26:10.821536 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:10.821895 master-0 kubenswrapper[4409]: E1203 14:26:10.821752 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:10.821961 master-0 kubenswrapper[4409]: E1203 14:26:10.821935 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:10.821961 master-0 kubenswrapper[4409]: E1203 14:26:10.821844 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:10.822081 master-0 kubenswrapper[4409]: E1203 14:26:10.822052 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:10.822193 master-0 kubenswrapper[4409]: E1203 14:26:10.822154 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:10.822261 master-0 kubenswrapper[4409]: E1203 14:26:10.822238 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:10.822391 master-0 kubenswrapper[4409]: E1203 14:26:10.822357 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:10.822462 master-0 kubenswrapper[4409]: E1203 14:26:10.822436 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:10.822575 master-0 kubenswrapper[4409]: E1203 14:26:10.822553 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:10.822661 master-0 kubenswrapper[4409]: E1203 14:26:10.822633 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:10.880357 master-0 kubenswrapper[4409]: I1203 14:26:10.880285 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: I1203 14:26:10.880496 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: I1203 14:26:10.880526 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: I1203 14:26:10.880573 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 
14:26:10.880909 master-0 kubenswrapper[4409]: I1203 14:26:10.880657 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880674 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880736 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880735 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880767 4409 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880784 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880801 4409 projected.go:194] Error preparing data 
for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880840 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.880909 master-0 kubenswrapper[4409]: E1203 14:26:10.880866 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.880944 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.880971 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.880983 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.880885 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not 
registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881057 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881084 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.880900 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.880878766 +0000 UTC m=+5.207941272 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881164 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881140754 +0000 UTC m=+5.208203260 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881223 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881272 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881332 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881315249 +0000 UTC m=+5.208377755 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881358 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.8813492 +0000 UTC m=+5.208411786 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881371 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.88136572 +0000 UTC m=+5.208428226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881386 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881401 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881410 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881447 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881437622 +0000 UTC m=+5.208500118 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881490 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881536 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881566 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881651 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object 
"openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881668 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881676 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881680 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881691 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881708 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881713 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881696719 +0000 UTC m=+5.208759225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881715 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881740 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881748 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881764 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881722 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881770 4409 projected.go:288] Couldn't get configMap 
openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881794 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881794 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881768061 +0000 UTC m=+5.208830567 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881803 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881772 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 
14:26:10.881854 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881845614 +0000 UTC m=+5.208908110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: E1203 14:26:10.881944 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881932706 +0000 UTC m=+5.208995212 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.881835 master-0 kubenswrapper[4409]: I1203 14:26:10.881951 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.881965 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.881957897 +0000 UTC m=+5.209020403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882066 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882078 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882100 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882101 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882167 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882144622 +0000 UTC m=+5.209207118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882201 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882206 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882217 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882226 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882229 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] 
Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882235 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882308 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882286396 +0000 UTC m=+5.209348902 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882379 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882412 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 
03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882426 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.88241387 +0000 UTC m=+5.209476376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882491 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882508 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882516 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882541 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.882534833 +0000 UTC m=+5.209597339 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882537 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882574 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882584 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882590 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882597 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882603 4409 projected.go:194] Error preparing 
data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882606 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882630 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882621976 +0000 UTC m=+5.209684482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882654 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882645636 +0000 UTC m=+5.209708422 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882700 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882726 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882802 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882819 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882833 4409 projected.go:288] 
Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882840 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882863 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882854622 +0000 UTC m=+5.209917128 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882898 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882909 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 
kubenswrapper[4409]: E1203 14:26:10.882923 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882931 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882935 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882953 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.882958 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882962 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.882991 4409 
projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883017 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.882984496 +0000 UTC m=+5.210047002 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883021 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883046 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883047 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883058 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object 
"openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883067 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883050 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.883042808 +0000 UTC m=+5.210105314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883198 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.883189642 +0000 UTC m=+5.210252148 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.883223 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.883214853 +0000 UTC m=+5.210277359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.883931 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.883970 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.883998 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884054 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884076 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884094 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884104 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884122 4409 projected.go:288] Couldn't get configMap 
openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884130 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884174 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884140 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884196 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884207 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884216 4409 
projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884228 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884236 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884227511 +0000 UTC m=+5.211290017 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884182 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884255 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:12.884246472 +0000 UTC m=+5.211309068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884262 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884274 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884278 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884292 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884299 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884185 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884308 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884297313 +0000 UTC m=+5.211359889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884317 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884327 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 
14:26:10.884328 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884319244 +0000 UTC m=+5.211381940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884397 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884440 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884430387 +0000 UTC m=+5.211492893 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884455 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884449518 +0000 UTC m=+5.211512024 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884476 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884490 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884499 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod 
openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884524 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884528 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.8845203 +0000 UTC m=+5.211582906 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884579 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884590 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884596 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884617 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884609952 +0000 UTC m=+5.211672458 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884618 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884669 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884679 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: E1203 14:26:10.884709 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:12.884699925 +0000 UTC m=+5.211762491 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:10.884954 master-0 kubenswrapper[4409]: I1203 14:26:10.884575 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:10.906035 master-0 kubenswrapper[4409]: I1203 14:26:10.905940 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="d724802f2b9fda1d6f83046fb2d991d1e2565807167900e0a9654642e6c1ff2c" exitCode=0 Dec 03 14:26:10.906385 master-0 kubenswrapper[4409]: I1203 14:26:10.906062 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"d724802f2b9fda1d6f83046fb2d991d1e2565807167900e0a9654642e6c1ff2c"} Dec 03 14:26:10.910107 master-0 kubenswrapper[4409]: I1203 14:26:10.910024 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-b62gf" event={"ID":"b71ac8a5-987d-4eba-8bc0-a091f0a0de16","Type":"ContainerStarted","Data":"b74e074002f4edffeea63e3e37363f14661a2a1ce11239ab89824b36366b12bd"} Dec 03 14:26:10.911760 master-0 kubenswrapper[4409]: I1203 14:26:10.911705 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c49fbfc6f-7krqx" 
event={"ID":"ec89938d-35a5-46ba-8c63-12489db18cbd","Type":"ContainerStarted","Data":"5c5c761a4177c2daacf1db4d8dcb8e4bb0211557dc27d6501b43c8da2d3f8c17"} Dec 03 14:26:10.913296 master-0 kubenswrapper[4409]: I1203 14:26:10.913242 4409 generic.go:334] "Generic (PLEG): container finished" podID="77430348-b53a-4898-8047-be8bb542a0a7" containerID="b47bfc6fd8a031847f3a009ef9594f370de5055e016f0fec7e9db2d8ca44a8cd" exitCode=0 Dec 03 14:26:10.913366 master-0 kubenswrapper[4409]: I1203 14:26:10.913314 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerDied","Data":"b47bfc6fd8a031847f3a009ef9594f370de5055e016f0fec7e9db2d8ca44a8cd"} Dec 03 14:26:10.914964 master-0 kubenswrapper[4409]: I1203 14:26:10.914937 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"1529c138a39238a0af452cdcfd1d8f3b373b89f9f3111f11537b9fdcabafe313"} Dec 03 14:26:10.916740 master-0 kubenswrapper[4409]: I1203 14:26:10.916692 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"0ca60002734f62ffb9b09227c64089674375fcde4500b0e21d04c08d69a6ff9c"} Dec 03 14:26:10.918038 master-0 kubenswrapper[4409]: I1203 14:26:10.917976 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6cbf58c977-8lh6n" event={"ID":"e97e1725-cb55-4ce3-952d-a4fd0731577d","Type":"ContainerStarted","Data":"dbdfe24c60287128b1150074143e873a7689115e181d5dde71f2a24feb3f7f78"} Dec 03 14:26:10.919486 master-0 kubenswrapper[4409]: I1203 14:26:10.919298 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-4xlhs" event={"ID":"42c95e54-b4ba-4b19-a97c-abcec840ac5d","Type":"ContainerStarted","Data":"8223ea2518b58c58ad309d1929d2ad5bdcdcb71fe5be5ee7ab9948c5eab67b43"} Dec 03 14:26:10.920764 master-0 kubenswrapper[4409]: I1203 14:26:10.920733 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-c8csx" event={"ID":"da583723-b3ad-4a6f-b586-09b739bd7f8c","Type":"ContainerStarted","Data":"68df71abbff9257b9691ad357af7b8ec3096b5d1bda8718ad9de4ce491bdfc43"} Dec 03 14:26:10.922033 master-0 kubenswrapper[4409]: I1203 14:26:10.921977 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-kk4tm" event={"ID":"c777c9de-1ace-46be-b5c2-c71d252f53f4","Type":"ContainerStarted","Data":"92715b6af64d2016cb38c24de4fec99d7a195c3d681d6fd824c332aab1db2122"} Dec 03 14:26:10.922149 master-0 kubenswrapper[4409]: I1203 14:26:10.922128 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:11.195867 master-0 kubenswrapper[4409]: I1203 14:26:11.195793 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:11.195867 master-0 kubenswrapper[4409]: I1203 14:26:11.195875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:11.196226 master-0 
kubenswrapper[4409]: E1203 14:26:11.195991 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: E1203 14:26:11.196028 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: E1203 14:26:11.196042 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: E1203 14:26:11.196095 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196078685 +0000 UTC m=+5.523141191 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: I1203 14:26:11.196136 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: I1203 14:26:11.196199 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:11.196226 master-0 kubenswrapper[4409]: E1203 14:26:11.196200 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196240 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196257 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod 
openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196280 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196291 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196297 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196309 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196322 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196302911 +0000 UTC m=+5.523365417 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: I1203 14:26:11.196238 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196352 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196344862 +0000 UTC m=+5.523407358 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196328 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196367 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196437 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196480 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196471366 +0000 UTC m=+5.523533872 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196488 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196504 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196511 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196522 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: E1203 14:26:11.196531 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.196540 master-0 kubenswrapper[4409]: I1203 14:26:11.196453 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196558 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196548968 +0000 UTC m=+5.523611484 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196625 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.19660442 +0000 UTC m=+5.523666996 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: I1203 14:26:11.196667 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: I1203 14:26:11.196760 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196784 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196801 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196808 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object 
"openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196835 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.196828886 +0000 UTC m=+5.523891392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196951 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196976 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.196985 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.197411 master-0 kubenswrapper[4409]: E1203 14:26:11.197041 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.197028262 +0000 UTC m=+5.524090768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.284360 master-0 kubenswrapper[4409]: I1203 14:26:11.284291 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Dec 03 14:26:11.301409 master-0 kubenswrapper[4409]: I1203 14:26:11.301347 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:11.301409 master-0 kubenswrapper[4409]: I1203 14:26:11.301396 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:11.301733 master-0 kubenswrapper[4409]: E1203 14:26:11.301571 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object 
"openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:11.301733 master-0 kubenswrapper[4409]: E1203 14:26:11.301602 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.301733 master-0 kubenswrapper[4409]: E1203 14:26:11.301617 4409 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.301733 master-0 kubenswrapper[4409]: E1203 14:26:11.301705 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.301682499 +0000 UTC m=+5.628745055 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.301948 master-0 kubenswrapper[4409]: E1203 14:26:11.301892 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:11.302025 master-0 kubenswrapper[4409]: E1203 14:26:11.301965 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.302025 master-0 kubenswrapper[4409]: E1203 14:26:11.301991 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.302193 master-0 kubenswrapper[4409]: E1203 14:26:11.302153 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.302119412 +0000 UTC m=+5.629181948 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.346630 master-0 kubenswrapper[4409]: I1203 14:26:11.346536 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Dec 03 14:26:11.404925 master-0 kubenswrapper[4409]: I1203 14:26:11.404847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:11.404925 master-0 kubenswrapper[4409]: I1203 14:26:11.404931 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405095 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405152 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object 
"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405165 4409 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405247 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.405226066 +0000 UTC m=+5.732288582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405332 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405355 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405369 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod 
openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.406127 master-0 kubenswrapper[4409]: E1203 14:26:11.405414 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.405399811 +0000 UTC m=+5.732462317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.407130 master-0 kubenswrapper[4409]: I1203 14:26:11.407105 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:11.407226 master-0 kubenswrapper[4409]: E1203 14:26:11.407210 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:11.407282 master-0 kubenswrapper[4409]: E1203 14:26:11.407226 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 
14:26:11.407282 master-0 kubenswrapper[4409]: E1203 14:26:11.407239 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.407282 master-0 kubenswrapper[4409]: E1203 14:26:11.407265 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.407258173 +0000 UTC m=+5.734320679 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.417776 master-0 kubenswrapper[4409]: I1203 14:26:11.417238 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:11.417776 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:11.417776 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:11.417776 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:11.418053 master-0 kubenswrapper[4409]: I1203 14:26:11.417787 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" 
podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:11.435597 master-0 kubenswrapper[4409]: I1203 14:26:11.435525 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:11.435757 master-0 kubenswrapper[4409]: I1203 14:26:11.435734 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:11.611946 master-0 kubenswrapper[4409]: I1203 14:26:11.611888 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:11.612189 master-0 kubenswrapper[4409]: I1203 14:26:11.611981 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:11.612294 master-0 kubenswrapper[4409]: E1203 14:26:11.612241 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:11.612294 master-0 kubenswrapper[4409]: E1203 14:26:11.612293 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.612410 master-0 kubenswrapper[4409]: E1203 14:26:11.612310 4409 projected.go:194] Error preparing data for projected volume 
kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.612479 master-0 kubenswrapper[4409]: E1203 14:26:11.612417 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:11.612538 master-0 kubenswrapper[4409]: E1203 14:26:11.612489 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:11.612538 master-0 kubenswrapper[4409]: E1203 14:26:11.612491 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.612468083 +0000 UTC m=+5.939530609 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.612538 master-0 kubenswrapper[4409]: E1203 14:26:11.612511 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.612729 master-0 kubenswrapper[4409]: E1203 14:26:11.612578 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:11.612729 master-0 kubenswrapper[4409]: E1203 14:26:11.612598 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:11.612729 master-0 kubenswrapper[4409]: I1203 14:26:11.612450 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:11.612729 master-0 kubenswrapper[4409]: E1203 14:26:11.612621 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:13.612589306 +0000 UTC m=+5.939651962 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:11.612729 master-0 kubenswrapper[4409]: E1203 14:26:11.612652 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.612635988 +0000 UTC m=+5.939698494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:11.612963 master-0 kubenswrapper[4409]: I1203 14:26:11.612909 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:11.613153 master-0 kubenswrapper[4409]: E1203 14:26:11.613114 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:11.613153 master-0 kubenswrapper[4409]: E1203 14:26:11.613142 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod 
openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:11.613252 master-0 kubenswrapper[4409]: E1203 14:26:11.613194 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:13.613180803 +0000 UTC m=+5.940243489 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:11.927795 master-0 kubenswrapper[4409]: I1203 14:26:11.927730 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:11.927795 master-0 kubenswrapper[4409]: I1203 14:26:11.927759 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"f0295ea8cb6bafcade2d690fad3966e7f64a43a62ac5f6edc3b01e458671fcb3"} Dec 03 14:26:12.419495 master-0 kubenswrapper[4409]: I1203 14:26:12.419442 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:12.419495 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:12.419495 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:12.419495 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:12.419775 master-0 kubenswrapper[4409]: I1203 14:26:12.419501 4409 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:12.656146 master-0 kubenswrapper[4409]: I1203 14:26:12.655831 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.656146 master-0 kubenswrapper[4409]: I1203 14:26:12.656135 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:12.656146 master-0 kubenswrapper[4409]: I1203 14:26:12.656182 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.656219 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.658450 master-0 
kubenswrapper[4409]: I1203 14:26:12.656243 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.656264 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.656291 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656317 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656314 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656765 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" 
not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656716 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656857 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.656816829 +0000 UTC m=+8.983879335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.656898 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656960 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656978 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.656978 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.656918832 +0000 UTC m=+8.983981458 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657030 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.656999164 +0000 UTC m=+8.984061670 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657088 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657178 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657234 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657283 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" 
(UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657334 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657380 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657449 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657497 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.657549 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657577 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657529 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657724 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657780 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657628 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.657604031 +0000 UTC m=+8.984666537 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.657965 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658030 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.657997123 +0000 UTC m=+8.985059619 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658041 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658073 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.658059854 +0000 UTC m=+8.985122360 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658211 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.658058 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658254 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.658235249 +0000 UTC m=+8.985297755 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658272 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: I1203 14:26:12.658316 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:12.658450 master-0 kubenswrapper[4409]: E1203 14:26:12.658483 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.658612 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.658622 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.658843 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") 
pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.658847 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.658990 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659099 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659244 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659124324 +0000 UTC m=+8.986186821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659356 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.65930628 +0000 UTC m=+8.986368776 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659403 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659391252 +0000 UTC m=+8.986453748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659446 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659411153 +0000 UTC m=+8.986473659 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659460 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659453294 +0000 UTC m=+8.986515800 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659480 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659466784 +0000 UTC m=+8.986529290 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659493 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659485935 +0000 UTC m=+8.986548441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659510 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659499395 +0000 UTC m=+8.986561901 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659522 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659515886 +0000 UTC m=+8.986578392 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659569 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659560837 +0000 UTC m=+8.986623343 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659614 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659574667 +0000 UTC m=+8.986637173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659627 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659620229 +0000 UTC m=+8.986682725 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: E1203 14:26:12.659644 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.659633699 +0000 UTC m=+8.986696205 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659667 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659759 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659820 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659860 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659888 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659945 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.660215 master-0 kubenswrapper[4409]: I1203 14:26:12.659974 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.660215 master-0 
kubenswrapper[4409]: I1203 14:26:12.660259 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660295 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660399 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660455 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660511 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660547 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660572 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.660602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661072 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661147 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661175 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661197 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661226 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: E1203 14:26:12.661253 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:12.661561 master-0 
kubenswrapper[4409]: E1203 14:26:12.661336 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: E1203 14:26:12.661419 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.661306526 +0000 UTC m=+8.988369032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: E1203 14:26:12.661506 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: E1203 14:26:12.661450 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66143249 +0000 UTC m=+8.988494996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: I1203 14:26:12.661261 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:12.661561 master-0 kubenswrapper[4409]: E1203 14:26:12.661542 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.661530493 +0000 UTC m=+8.988592999 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: I1203 14:26:12.661610 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: I1203 14:26:12.661661 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: I1203 14:26:12.661768 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661617 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661963 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661983 4409 
configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662044 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662081 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662059698 +0000 UTC m=+8.989122204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662115 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662102339 +0000 UTC m=+8.989164845 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662126 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661745 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662154 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66213034 +0000 UTC m=+8.989192846 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661767 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662182 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662166141 +0000 UTC m=+8.989228647 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662188 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662191 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662130 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662238 4409 
secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662263 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662217762 +0000 UTC m=+8.989280268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661855 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662309 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662282584 +0000 UTC m=+8.989345100 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662310 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662331 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662330 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662321255 +0000 UTC m=+8.989383771 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661923 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662372 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.662356046 +0000 UTC m=+8.989418552 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661938 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662406 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662393977 +0000 UTC m=+8.989456483 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661796 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.662432 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.662421998 +0000 UTC m=+8.989484504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661820 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: E1203 14:26:12.661722 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:12.662410 master-0 kubenswrapper[4409]: I1203 14:26:12.662229 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662465 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662449629 +0000 UTC m=+8.989512135 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662623 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662568692 +0000 UTC m=+8.989631208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662646 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662675 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662653325 +0000 UTC m=+8.989715851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662722 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662693826 +0000 UTC m=+8.989756352 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662760 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662739567 +0000 UTC m=+8.989802093 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.662984 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.662969534 +0000 UTC m=+8.990032050 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663046 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663031115 +0000 UTC m=+8.990093641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: I1203 14:26:12.663097 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663257 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663633 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663655 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663632502 +0000 UTC m=+8.990695028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663692 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663682174 +0000 UTC m=+8.990744680 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: I1203 14:26:12.663620 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663721 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663702484 +0000 UTC m=+8.990765000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663788 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663758826 +0000 UTC m=+8.990821342 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663813 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.663801017 +0000 UTC m=+8.990863533 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: I1203 14:26:12.663856 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663952 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: I1203 14:26:12.663953 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.663961 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.664055 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.664027494 +0000 UTC m=+8.991090000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.664068 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:26:12.664071 master-0 kubenswrapper[4409]: E1203 14:26:12.664122 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.664105896 +0000 UTC m=+8.991168422 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: I1203 14:26:12.664130 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: E1203 14:26:12.664268 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.664145057 +0000 UTC m=+8.991207573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: E1203 14:26:12.664296 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: E1203 14:26:12.664300 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: I1203 14:26:12.664302 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: E1203 14:26:12.665149 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665134625 +0000 UTC m=+8.992197131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:12.665204 master-0 kubenswrapper[4409]: E1203 14:26:12.665178 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665171356 +0000 UTC m=+8.992233862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: I1203 14:26:12.665269 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: I1203 14:26:12.665298 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: I1203 14:26:12.665319 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: E1203 14:26:12.665328 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: E1203 14:26:12.665407 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: E1203 14:26:12.665425 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: I1203 14:26:12.665340 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: E1203 14:26:12.665472 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:26:12.665499 master-0 kubenswrapper[4409]: E1203 14:26:12.665484 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665445084 +0000 UTC m=+8.992507600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665517 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665504955 +0000 UTC m=+8.992567471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665547 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: I1203 14:26:12.665568 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665592 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665637 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665625659 +0000 UTC m=+8.992688175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: I1203 14:26:12.665632 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665685 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66567037 +0000 UTC m=+8.992732876 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665695 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665709 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665703461 +0000 UTC m=+8.992765957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: E1203 14:26:12.665738 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665727442 +0000 UTC m=+8.992790158 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:12.665847 master-0 kubenswrapper[4409]: I1203 14:26:12.665813 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.665891 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.665893 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.665919 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.665905557 +0000 UTC m=+8.992968263 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.665961 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.666029 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666036 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666054 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666087 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666051 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666041281 +0000 UTC m=+8.993103907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666122 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666112683 +0000 UTC m=+8.993175199 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.666152 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.666191 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666213 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666216 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666201565 +0000 UTC m=+8.993264081 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666267 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: E1203 14:26:12.666328 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666276287 +0000 UTC m=+8.993338973 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:12.666366 master-0 kubenswrapper[4409]: I1203 14:26:12.666380 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666473 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666510 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666500474 +0000 UTC m=+8.993562980 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666535 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666521814 +0000 UTC m=+8.993584320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666573 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666612 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666638 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666676 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666668668 +0000 UTC m=+8.993731404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666639 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666705 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666684049 +0000 UTC m=+8.993746575 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666736 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666745 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666776 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666765171 +0000 UTC m=+8.993827867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666838 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666865 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666876 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666865374 +0000 UTC m=+8.993927890 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666913 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666927 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666945 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.666959 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.666949396 +0000 UTC m=+8.994011912 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.666985 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.667024 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.667051 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.667068 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.667098 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.667108 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667100531 +0000 UTC m=+8.994163037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.667119 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: I1203 14:26:12.667135 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:12.667132 master-0 kubenswrapper[4409]: E1203 14:26:12.667141 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667188 4409 
secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667205 4409 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667158 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667147192 +0000 UTC m=+8.994209918 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667218 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667243 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667246 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:16.667229864 +0000 UTC m=+8.994292370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667164 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667284 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667316 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667302766 +0000 UTC m=+8.994365462 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667344 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667384 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667391 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667381459 +0000 UTC m=+8.994444275 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667403 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667434 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66742208 +0000 UTC m=+8.994484806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667489 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667477841 +0000 UTC m=+8.994540347 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667524 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667513812 +0000 UTC m=+8.994576538 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667570 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667611 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667652 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images 
podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667636466 +0000 UTC m=+8.994698982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667613 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667675 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667665397 +0000 UTC m=+8.994727903 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667686 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667704 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667727 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667740 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667729949 +0000 UTC m=+8.994792635 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667801 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667821 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667845 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667827681 +0000 UTC m=+8.994890387 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667903 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667944 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667934164 +0000 UTC m=+8.994996860 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.667944 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.667908 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 
14:26:12.667985 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.667975605 +0000 UTC m=+8.995038282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668047 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668083 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668127 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 
14:26:12.668195 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668211 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668230 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668242 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668231323 +0000 UTC m=+8.995293829 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668270 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668287 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668276434 +0000 UTC m=+8.995338950 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668300 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668312 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668301695 +0000 UTC m=+8.995364221 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668267 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668340 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668330166 +0000 UTC m=+8.995392672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668374 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668412 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668429 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668451 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668471 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" 
not registered Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: I1203 14:26:12.668488 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:12.668423 master-0 kubenswrapper[4409]: E1203 14:26:12.668501 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668506 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66849262 +0000 UTC m=+8.995555126 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668544 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668539 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668526281 +0000 UTC m=+8.995588977 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668584 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668605 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668621 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668623 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668643 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.668632164 +0000 UTC m=+8.995694870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668665 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668658055 +0000 UTC m=+8.995720631 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668688 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668701 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668714 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668707046 +0000 UTC m=+8.995769552 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668738 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668759 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668804 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668779238 +0000 UTC m=+8.995841764 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668817 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668771 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668836 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668839 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66882617 +0000 UTC m=+8.995888906 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668912 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.668947 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668968 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.668952983 +0000 UTC m=+8.996015689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.668997 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.668986564 +0000 UTC m=+8.996049280 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669043 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669031545 +0000 UTC m=+8.996094261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669043 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669057 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669080 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669096 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669086227 +0000 UTC m=+8.996148753 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669094 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669118 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669108898 +0000 UTC m=+8.996171414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669154 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669130708 +0000 UTC m=+8.996193254 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669183 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669210 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66920284 +0000 UTC m=+8.996265346 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669240 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669374 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669500 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669556 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669583 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669591 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669581471 +0000 UTC m=+8.996643987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669633 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669705 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669721 4409 secret.go:189] Couldn't get secret 
openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669731 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669719305 +0000 UTC m=+8.996781821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669771 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669778 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669773 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669814 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669784 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669878 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669833 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669818898 +0000 UTC m=+8.996881484 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669910 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669916 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.66990638 +0000 UTC m=+8.996968896 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.669944 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669966 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669951012 +0000 UTC m=+8.997013728 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.669979 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670023 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.669991533 +0000 UTC m=+8.997054319 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670081 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670126 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs 
podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670110416 +0000 UTC m=+8.997172922 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670131 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670145 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670139097 +0000 UTC m=+8.997201603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670170 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670179 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670168468 +0000 UTC m=+8.997231154 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670187 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670236 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670250 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670269 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670210 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670275 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670269191 +0000 UTC m=+8.997331697 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670338 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670374 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670387 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670345043 +0000 UTC m=+8.997407599 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670449 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670427985 +0000 UTC m=+8.997490701 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670524 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670547 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670567 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670548669 +0000 UTC m=+8.997611195 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670616 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670690 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670717 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670705613 +0000 UTC m=+8.997768119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670743 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670771 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670793 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670836 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670847 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670820186 +0000 UTC m=+8.997882742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670862 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670866 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.670892 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.670885028 +0000 UTC m=+8.997947534 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.670974 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671030 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671016372 +0000 UTC m=+8.998078898 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671057 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671046713 +0000 UTC m=+8.998109439 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.671082 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.671107 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671119 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.671132 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.671154 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671266 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671330 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.67131737 +0000 UTC m=+8.998379896 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671341 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671373 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671385 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: E1203 14:26:12.671383 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:12.671232 master-0 kubenswrapper[4409]: I1203 14:26:12.671318 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671355 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671347521 +0000 UTC m=+8.998410027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671500 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671486225 +0000 UTC m=+8.998548741 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671536 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671522156 +0000 UTC m=+8.998584672 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671565 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671552587 +0000 UTC m=+8.998615103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671592 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671582808 +0000 UTC m=+8.998645324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671633 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671671 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671707 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671765 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671802 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671794294 +0000 UTC m=+8.998856800 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671804 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671820 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671851 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671839815 +0000 UTC m=+8.998902511 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671763 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671879 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671868136 +0000 UTC m=+8.998930862 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671903 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671914 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.671959 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.671997 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.671974329 +0000 UTC m=+8.999036945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672072 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672110 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672097752 +0000 UTC m=+8.999160268 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672101 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672031 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672227 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672249 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672198 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672272 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672253137 +0000 UTC m=+8.999315823 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672315 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672306568 +0000 UTC m=+8.999369074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672329 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672323749 +0000 UTC m=+8.999386255 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672349 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672378 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672402 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672424 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672449 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672473 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672450 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672524 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672537 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672513 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672475 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672566 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672549605 +0000 UTC m=+8.999612291 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672582 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672650 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672638588 +0000 UTC m=+8.999701094 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672668 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672661438 +0000 UTC m=+8.999723944 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672689 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672713 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672708 4409 secret.go:189] Couldn't get secret 
openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.672737 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672851 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672731 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672750 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672735851 +0000 UTC m=+8.999798527 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672770 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672819 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.672946 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.672937356 +0000 UTC m=+8.999999862 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673146 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673184 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673227 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673218944 +0000 UTC m=+9.000281450 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673251 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673241655 +0000 UTC m=+9.000304161 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673263 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673258005 +0000 UTC m=+9.000320511 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673277 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673270466 +0000 UTC m=+9.000332962 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673293 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673285796 +0000 UTC m=+9.000348302 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673308 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673301207 +0000 UTC m=+9.000363713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673332 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673363 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 
14:26:12.673389 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673422 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673482 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673533 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.675900 
master-0 kubenswrapper[4409]: E1203 14:26:12.673545 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673556 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673591 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673623 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673625 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673594015 +0000 UTC m=+9.000656721 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673662 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673690 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673673897 +0000 UTC m=+9.000736593 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673707 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673711 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673664 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673731 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object 
"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673672 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673744 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673734259 +0000 UTC m=+9.000796965 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673825 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673902 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673885383 +0000 UTC m=+9.000947909 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673947 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.673983 4409 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674022 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.673992316 +0000 UTC m=+9.001054822 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674040 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674032277 +0000 UTC m=+9.001094783 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.673986 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674058 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674049848 +0000 UTC m=+9.001112354 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674073 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674065748 +0000 UTC m=+9.001128254 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674078 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674085 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674080489 +0000 UTC m=+9.001142995 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674100 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674094469 +0000 UTC m=+9.001156975 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674187 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674219 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674271 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.674262864 +0000 UTC m=+9.001325370 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674259 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674307 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674332 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674326356 +0000 UTC m=+9.001388862 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674326 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674370 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674390 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674382307 +0000 UTC m=+9.001444813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674407 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674416 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674434 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674481 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674451029 +0000 UTC m=+9.001513575 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674493 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674498 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674506 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674528 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674506 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674591 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod 
openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674634 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674544 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674532791 +0000 UTC m=+9.001595297 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: E1203 14:26:12.674666 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674657005 +0000 UTC m=+9.001719511 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:12.675900 master-0 kubenswrapper[4409]: I1203 14:26:12.674716 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: I1203 14:26:12.674739 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: I1203 14:26:12.674760 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: I1203 14:26:12.674788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod 
\"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674815 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674852 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.67484175 +0000 UTC m=+9.001904256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674873 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674865061 +0000 UTC m=+9.001927567 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674888 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674881821 +0000 UTC m=+9.001944327 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674896 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674921 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674936 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.674977 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674957443 +0000 UTC m=+9.002020119 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.675038 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.674992864 +0000 UTC m=+9.002055580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:12.681253 master-0 kubenswrapper[4409]: E1203 14:26:12.675076 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.675062816 +0000 UTC m=+9.002125382 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:12.733393 master-0 kubenswrapper[4409]: I1203 14:26:12.732684 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:12.733393 master-0 kubenswrapper[4409]: I1203 14:26:12.733176 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:12.738320 master-0 kubenswrapper[4409]: I1203 14:26:12.738271 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:12.777912 master-0 kubenswrapper[4409]: I1203 14:26:12.777861 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:12.778299 master-0 kubenswrapper[4409]: E1203 14:26:12.778057 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:12.778299 master-0 kubenswrapper[4409]: E1203 14:26:12.778081 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.778299 master-0 kubenswrapper[4409]: E1203 14:26:12.778093 4409 projected.go:194] Error preparing data for projected volume 
kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.778299 master-0 kubenswrapper[4409]: E1203 14:26:12.778153 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.77813639 +0000 UTC m=+9.105198896 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.815113 master-0 kubenswrapper[4409]: I1203 14:26:12.815051 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:12.815113 master-0 kubenswrapper[4409]: I1203 14:26:12.815088 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815112 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815207 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815224 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: E1203 14:26:12.815205 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815232 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815270 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815268 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815253 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:12.815412 master-0 kubenswrapper[4409]: I1203 14:26:12.815349 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:12.815747 master-0 kubenswrapper[4409]: I1203 14:26:12.815709 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:12.815747 master-0 kubenswrapper[4409]: I1203 14:26:12.815733 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815743 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815739 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815773 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815783 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815798 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815763 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815815 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.815796 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815803 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815920 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815787 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815772 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815971 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815796 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816034 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816053 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.816041 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816069 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815818 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816121 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816126 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816132 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816166 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.815824 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816071 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816139 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816221 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.816230 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816205 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816260 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816264 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816226 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816143 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816120 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816267 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816264 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.816315 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816250 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816357 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816326 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816303 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816427 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816324 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816460 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816489 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.816475 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816516 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: E1203 14:26:12.816532 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816549 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816575 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:12.816531 master-0 kubenswrapper[4409]: I1203 14:26:12.816595 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816629 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.816634 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816671 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.816689 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816765 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.816796 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816814 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816840 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816858 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816883 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816902 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816935 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: I1203 14:26:12.816957 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817314 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817335 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817437 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817501 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817694 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817791 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817852 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.817925 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818127 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818214 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818288 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818429 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818499 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818563 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:12.818605 master-0 kubenswrapper[4409]: E1203 14:26:12.818626 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.818825 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.818950 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.819039 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.819127 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.819305 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:12.819435 master-0 kubenswrapper[4409]: E1203 14:26:12.819404 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:12.819638 master-0 kubenswrapper[4409]: E1203 14:26:12.819478 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:12.819638 master-0 kubenswrapper[4409]: E1203 14:26:12.819587 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:12.819758 master-0 kubenswrapper[4409]: E1203 14:26:12.819711 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:12.819942 master-0 kubenswrapper[4409]: E1203 14:26:12.819892 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:12.820033 master-0 kubenswrapper[4409]: E1203 14:26:12.820000 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:12.820133 master-0 kubenswrapper[4409]: E1203 14:26:12.820102 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:12.820341 master-0 kubenswrapper[4409]: E1203 14:26:12.820279 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:12.820393 master-0 kubenswrapper[4409]: E1203 14:26:12.820371 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:12.820458 master-0 kubenswrapper[4409]: E1203 14:26:12.820441 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:12.820613 master-0 kubenswrapper[4409]: E1203 14:26:12.820585 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:12.820782 master-0 kubenswrapper[4409]: E1203 14:26:12.820746 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:12.820819 master-0 kubenswrapper[4409]: E1203 14:26:12.820803 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:12.820938 master-0 kubenswrapper[4409]: E1203 14:26:12.820904 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:12.820998 master-0 kubenswrapper[4409]: E1203 14:26:12.820974 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:12.821051 master-0 kubenswrapper[4409]: E1203 14:26:12.821034 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:12.821271 master-0 kubenswrapper[4409]: E1203 14:26:12.821227 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:12.821358 master-0 kubenswrapper[4409]: E1203 14:26:12.821334 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:12.821418 master-0 kubenswrapper[4409]: E1203 14:26:12.821400 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:12.821665 master-0 kubenswrapper[4409]: E1203 14:26:12.821602 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:12.821717 master-0 kubenswrapper[4409]: E1203 14:26:12.821641 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:12.821717 master-0 kubenswrapper[4409]: E1203 14:26:12.821700 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:12.821800 master-0 kubenswrapper[4409]: E1203 14:26:12.821784 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:12.821848 master-0 kubenswrapper[4409]: E1203 14:26:12.821829 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:12.821940 master-0 kubenswrapper[4409]: E1203 14:26:12.821920 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:12.822147 master-0 kubenswrapper[4409]: E1203 14:26:12.822106 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:12.822225 master-0 kubenswrapper[4409]: E1203 14:26:12.822197 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:12.822375 master-0 kubenswrapper[4409]: E1203 14:26:12.822341 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:12.822426 master-0 kubenswrapper[4409]: E1203 14:26:12.822381 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:12.822515 master-0 kubenswrapper[4409]: E1203 14:26:12.822480 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:12.822559 master-0 kubenswrapper[4409]: E1203 14:26:12.822535 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:12.822608 master-0 kubenswrapper[4409]: E1203 14:26:12.822582 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:12.822655 master-0 kubenswrapper[4409]: E1203 14:26:12.822631 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:12.822720 master-0 kubenswrapper[4409]: E1203 14:26:12.822700 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:12.822884 master-0 kubenswrapper[4409]: E1203 14:26:12.822852 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:12.822932 master-0 kubenswrapper[4409]: E1203 14:26:12.822900 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:12.823015 master-0 kubenswrapper[4409]: E1203 14:26:12.822985 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:12.823075 master-0 kubenswrapper[4409]: E1203 14:26:12.823048 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:12.933222 master-0 kubenswrapper[4409]: I1203 14:26:12.933170 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/5.log" Dec 03 14:26:12.934403 master-0 kubenswrapper[4409]: I1203 14:26:12.933646 4409 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="f0295ea8cb6bafcade2d690fad3966e7f64a43a62ac5f6edc3b01e458671fcb3" exitCode=255 Dec 03 14:26:12.934403 master-0 kubenswrapper[4409]: I1203 14:26:12.933730 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"f0295ea8cb6bafcade2d690fad3966e7f64a43a62ac5f6edc3b01e458671fcb3"} Dec 03 14:26:12.934403 master-0 kubenswrapper[4409]: I1203 14:26:12.934137 4409 scope.go:117] "RemoveContainer" containerID="f0295ea8cb6bafcade2d690fad3966e7f64a43a62ac5f6edc3b01e458671fcb3" Dec 03 14:26:12.935873 master-0 kubenswrapper[4409]: I1203 14:26:12.935844 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-tlsvq" event={"ID":"6b681889-eb2c-41fb-a1dc-69b99227b45b","Type":"ContainerStarted","Data":"0e02a2c152c8e0c5c936c45390f05da6e1bc28d9f6680694a4adb77da48debcd"} Dec 03 14:26:12.939221 master-0 kubenswrapper[4409]: I1203 14:26:12.939196 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="2f6b2d74566bd85f20d4721c8ec43317c92dc4c6472989d1177639e579447cb6" exitCode=0 Dec 03 14:26:12.939313 master-0 kubenswrapper[4409]: I1203 14:26:12.939265 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"2f6b2d74566bd85f20d4721c8ec43317c92dc4c6472989d1177639e579447cb6"} Dec 03 14:26:12.941261 master-0 kubenswrapper[4409]: I1203 14:26:12.941220 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-n24qb" event={"ID":"6ef37bba-85d9-4303-80c0-aac3dc49d3d9","Type":"ContainerStarted","Data":"161f90fbb75085bca4ec1c97de12feee3a918bf2d2f8a651d29ba3e66848427f"} Dec 03 14:26:12.945778 master-0 kubenswrapper[4409]: I1203 14:26:12.945738 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"350baa075127d9473e1aa994bef76334239fb3fb79b24813acad4e15d7d59074"} Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945778 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"1a49cdb54672cde8c8f34f34135ea81287589daaef18b1d26abb90c51e26a56f"} Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945795 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"d6d80ac7f04216818ee16cc0f594297eb6f29561b3216d8d4ae0b28c6bb90cd8"} Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945807 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"6d76d1d103f4180af10450724dd18123d7bcf6d3bf049681d76b91045a8c7243"} Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945816 4409 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945818 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"9bbc960420625fc02c874dfb4ab1775ac06d0f309e64a664280911f767f73ed2"} Dec 03 14:26:12.945873 master-0 kubenswrapper[4409]: I1203 14:26:12.945869 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"e6e710e75f1fe02b87279f42d510478b508090af941b8de73d8ac0aa39303981"} Dec 03 14:26:12.952117 master-0 kubenswrapper[4409]: I1203 14:26:12.951988 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983253 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983358 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: 
\"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983505 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983543 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983608 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.983586836 +0000 UTC m=+9.310649342 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983523 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983657 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983678 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983681 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: I1203 14:26:12.983795 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod 
\"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:12.983790 master-0 kubenswrapper[4409]: E1203 14:26:12.983813 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.983861 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983867 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983890 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983903 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983657 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object 
"openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983940 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983900 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.983877354 +0000 UTC m=+9.310939910 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983746 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984075 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984083 4409 projected.go:288] Couldn't get configMap 
openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984106 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984107 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.98409787 +0000 UTC m=+9.311160376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983915 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984137 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984148 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984158 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984159 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984175 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983954 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.983699 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 
kubenswrapper[4409]: E1203 14:26:12.984213 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984216 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984197483 +0000 UTC m=+9.311260079 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984222 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984235 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984227484 +0000 UTC m=+9.311290100 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984237 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984255 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984264 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984286 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984280225 +0000 UTC m=+9.311342831 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984348 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984395 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984403 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984411 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984418 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.984382508 +0000 UTC m=+9.311445034 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984455 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.98444409 +0000 UTC m=+9.311506706 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984484 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984475701 +0000 UTC m=+9.311538297 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984543 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984608 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984627 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984639 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984640 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" 
failed. No retries permitted until 2025-12-03 14:26:16.984630925 +0000 UTC m=+9.311693441 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984676 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984663696 +0000 UTC m=+9.311726282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984724 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984839 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 
kubenswrapper[4409]: E1203 14:26:12.984852 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984859 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984884 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984875032 +0000 UTC m=+9.311937528 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984854 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984912 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not 
registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.984919 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984930 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984940 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984969 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.984960365 +0000 UTC m=+9.312022931 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.984997 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985020 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985026 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.985024 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985049 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:16.985041547 +0000 UTC m=+9.312104053 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.985068 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985091 4409 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985105 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985113 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985146 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq 
podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.98513769 +0000 UTC m=+9.312200266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: I1203 14:26:12.985201 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985269 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985281 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985288 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985320 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.985312725 +0000 UTC m=+9.312375231 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985359 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985420 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985441 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.985536 master-0 kubenswrapper[4409]: E1203 14:26:12.985532 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.9855039 +0000 UTC m=+9.312566406 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.986951 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.986998 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987061 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987082 4409 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" 
not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987099 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987109 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987111 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987144 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987131976 +0000 UTC m=+9.314194482 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987189 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987191 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987213 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987239 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987251 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987204 4409 
projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987259 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987276 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987288 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987278751 +0000 UTC m=+9.314341257 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987248 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987333 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987341 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987351 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987361 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987333 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987320742 +0000 UTC m=+9.314383438 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987405 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987439 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987457 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987468 4409 projected.go:194] Error preparing data for 
projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987512 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987497067 +0000 UTC m=+9.314559763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987537 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987527008 +0000 UTC m=+9.314589744 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987556 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987545488 +0000 UTC m=+9.314608214 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987720 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987839 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: 
I1203 14:26:12.987845 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987857 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987868 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.987878 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987898 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.987888388 +0000 UTC m=+9.314950964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987959 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987974 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.987983 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.988029 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.988053 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.988064 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod 
openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.988101 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.988091144 +0000 UTC m=+9.315153740 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.988131 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.988353 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: I1203 14:26:12.988379 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:12.988364 master-0 kubenswrapper[4409]: E1203 14:26:12.988413 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988455 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988467 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988472 4409 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988488 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw 
podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.988480585 +0000 UTC m=+9.315543091 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988533 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.988512446 +0000 UTC m=+9.315574982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988477 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988598 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access 
podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.988583698 +0000 UTC m=+9.315646244 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988638 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988651 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988658 4409 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: I1203 14:26:12.988424 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988682 4409 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.98867622 +0000 UTC m=+9.315738726 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988724 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988734 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.988741 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: I1203 14:26:12.989040 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.989215 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.989200485 +0000 UTC m=+9.316263001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.989373 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.989399 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.989415 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:12.990333 master-0 kubenswrapper[4409]: E1203 14:26:12.989464 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:16.989447172 +0000 UTC m=+9.316509708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.022621 master-0 kubenswrapper[4409]: E1203 14:26:13.021271 4409 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 03 14:26:13.205826 master-0 kubenswrapper[4409]: I1203 14:26:13.205760 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:13.205941 master-0 kubenswrapper[4409]: I1203 14:26:13.205855 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:13.205987 master-0 kubenswrapper[4409]: E1203 14:26:13.205963 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.206062 master-0 kubenswrapper[4409]: E1203 14:26:13.205994 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.206062 master-0 kubenswrapper[4409]: E1203 14:26:13.206043 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206147 master-0 kubenswrapper[4409]: E1203 14:26:13.206067 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.206147 master-0 kubenswrapper[4409]: E1203 14:26:13.206090 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.206147 master-0 kubenswrapper[4409]: E1203 14:26:13.206105 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206147 master-0 kubenswrapper[4409]: E1203 14:26:13.206117 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.206096666 +0000 UTC m=+9.533159212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206329 master-0 kubenswrapper[4409]: E1203 14:26:13.206151 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.206132017 +0000 UTC m=+9.533194573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206329 master-0 kubenswrapper[4409]: I1203 14:26:13.206085 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:13.206329 master-0 kubenswrapper[4409]: E1203 14:26:13.206222 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.206329 master-0 kubenswrapper[4409]: E1203 14:26:13.206245 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.206329 master-0 kubenswrapper[4409]: E1203 14:26:13.206260 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: I1203 14:26:13.206343 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: E1203 14:26:13.206429 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: E1203 14:26:13.206447 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: I1203 14:26:13.206444 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: E1203 14:26:13.206459 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206517 master-0 kubenswrapper[4409]: E1203 14:26:13.206479 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.206457526 +0000 UTC m=+9.533520102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: E1203 14:26:13.206598 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: E1203 14:26:13.206620 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: E1203 14:26:13.206625 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.20660183 +0000 UTC m=+9.533664346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: E1203 14:26:13.206636 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: E1203 14:26:13.206703 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.206686193 +0000 UTC m=+9.533748749 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.206759 master-0 kubenswrapper[4409]: I1203 14:26:13.206748 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: I1203 14:26:13.206781 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: I1203 14:26:13.206831 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206886 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206903 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206908 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206930 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206942 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206914 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206978 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.206967281 +0000 UTC m=+9.534029787 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.206994 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.207031 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.207020862 +0000 UTC m=+9.534083368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.207048 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.207078 master-0 kubenswrapper[4409]: E1203 14:26:13.207060 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.207800 master-0 kubenswrapper[4409]: E1203 14:26:13.207191 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.207179377 +0000 UTC m=+9.534241963 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.311462 master-0 kubenswrapper[4409]: I1203 14:26:13.311417 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:13.311462 master-0 kubenswrapper[4409]: I1203 14:26:13.311474 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:13.311983 master-0 kubenswrapper[4409]: E1203 14:26:13.311963 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.312116 master-0 kubenswrapper[4409]: E1203 14:26:13.312102 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.312280 master-0 kubenswrapper[4409]: E1203 14:26:13.312218 4409 projected.go:194] Error preparing data for projected volume
kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.312366 master-0 kubenswrapper[4409]: E1203 14:26:13.312125 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.312366 master-0 kubenswrapper[4409]: E1203 14:26:13.312359 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.312505 master-0 kubenswrapper[4409]: E1203 14:26:13.312377 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.312655 master-0 kubenswrapper[4409]: E1203 14:26:13.312596 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.312259346 +0000 UTC m=+9.639321853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.312655 master-0 kubenswrapper[4409]: E1203 14:26:13.312629 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.312620687 +0000 UTC m=+9.639683193 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.415278 master-0 kubenswrapper[4409]: I1203 14:26:13.414949 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:13.415492 master-0 kubenswrapper[4409]: I1203 14:26:13.415336 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:13.415492 master-0 kubenswrapper[4409]: I1203 14:26:13.415368 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:13.415492 master-0 kubenswrapper[4409]: E1203 14:26:13.415215 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.415492 master-0 kubenswrapper[4409]: E1203 14:26:13.415422 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.415492 master-0 kubenswrapper[4409]: E1203 14:26:13.415442 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.415658 master-0 kubenswrapper[4409]: E1203 14:26:13.415579 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.415658 master-0 kubenswrapper[4409]: E1203 14:26:13.415602 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.415658 master-0 kubenswrapper[4409]: E1203 14:26:13.415612 4409 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.415751 master-0 kubenswrapper[4409]: E1203 14:26:13.415676 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.415643668 +0000 UTC m=+9.742706194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.415751 master-0 kubenswrapper[4409]: E1203 14:26:13.415699 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.415871 master-0 kubenswrapper[4409]: E1203 14:26:13.415762 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.415871 master-0 kubenswrapper[4409]: E1203 14:26:13.415830 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.415964 master-0 kubenswrapper[4409]: E1203 14:26:13.415921 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.415845694 +0000 UTC m=+9.742908230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.416171 master-0 kubenswrapper[4409]: E1203 14:26:13.416144 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.416127092 +0000 UTC m=+9.743189618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.418029 master-0 kubenswrapper[4409]: I1203 14:26:13.417969 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:26:13.418029 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld
Dec 03 14:26:13.418029 master-0 kubenswrapper[4409]: [+]process-running ok
Dec 03 14:26:13.418029 master-0 kubenswrapper[4409]: healthz check failed
Dec 03 14:26:13.418167 master-0 kubenswrapper[4409]: I1203 14:26:13.418095 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:26:13.628550 master-0 kubenswrapper[4409]: I1203 14:26:13.628319 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:13.628550 master-0 kubenswrapper[4409]: I1203 14:26:13.628452 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:13.628926 master-0 kubenswrapper[4409]: E1203 14:26:13.628725 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.628926 master-0 kubenswrapper[4409]: E1203 14:26:13.628787 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.628926 master-0 kubenswrapper[4409]: E1203 14:26:13.628809 4409 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.629146 master-0 kubenswrapper[4409]: I1203 14:26:13.628960 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:13.629146 master-0 kubenswrapper[4409]: E1203 14:26:13.629075 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.628984568 +0000 UTC m=+9.956047074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.629257 master-0 kubenswrapper[4409]: E1203 14:26:13.629105 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.629257 master-0 kubenswrapper[4409]: E1203 14:26:13.629170 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.629257 master-0 kubenswrapper[4409]: E1203 14:26:13.629201 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:13.629257 master-0 kubenswrapper[4409]: E1203 14:26:13.629241 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:13.629437 master-0 kubenswrapper[4409]: E1203 14:26:13.629202 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Dec 03 14:26:13.629524 master-0 kubenswrapper[4409]: E1203 14:26:13.629481 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr
podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.629440461 +0000 UTC m=+9.956503057 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:13.629583 master-0 kubenswrapper[4409]: E1203 14:26:13.629539 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.629518304 +0000 UTC m=+9.956581080 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:13.629855 master-0 kubenswrapper[4409]: I1203 14:26:13.629824 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:13.630117 master-0 kubenswrapper[4409]: E1203 14:26:13.630077 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:13.630187 master-0 kubenswrapper[4409]: E1203 14:26:13.630119 4409 projected.go:194] 
Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:13.630235 master-0 kubenswrapper[4409]: E1203 14:26:13.630217 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:17.630180062 +0000 UTC m=+9.957242598 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:13.709283 master-0 kubenswrapper[4409]: I1203 14:26:13.709157 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:13.709747 master-0 kubenswrapper[4409]: I1203 14:26:13.709448 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:13.717050 master-0 kubenswrapper[4409]: I1203 14:26:13.716948 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:13.955099 master-0 kubenswrapper[4409]: I1203 14:26:13.954953 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="44111ff32c843054d81b68581e3b72d07cf85f147fe7f67fcb64a40a694a95a3" exitCode=0 Dec 03 14:26:13.955099 master-0 kubenswrapper[4409]: I1203 14:26:13.955048 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" 
event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"44111ff32c843054d81b68581e3b72d07cf85f147fe7f67fcb64a40a694a95a3"} Dec 03 14:26:13.958190 master-0 kubenswrapper[4409]: I1203 14:26:13.958143 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/6.log" Dec 03 14:26:13.959077 master-0 kubenswrapper[4409]: I1203 14:26:13.959020 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/5.log" Dec 03 14:26:13.959790 master-0 kubenswrapper[4409]: I1203 14:26:13.959729 4409 generic.go:334] "Generic (PLEG): container finished" podID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" containerID="aa024d4c0653252afb473b187106942d48c2412c2b937333e81a6fb1ddebaaf4" exitCode=255 Dec 03 14:26:13.960298 master-0 kubenswrapper[4409]: I1203 14:26:13.960015 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerDied","Data":"aa024d4c0653252afb473b187106942d48c2412c2b937333e81a6fb1ddebaaf4"} Dec 03 14:26:13.960298 master-0 kubenswrapper[4409]: I1203 14:26:13.960147 4409 scope.go:117] "RemoveContainer" containerID="f0295ea8cb6bafcade2d690fad3966e7f64a43a62ac5f6edc3b01e458671fcb3" Dec 03 14:26:13.960618 master-0 kubenswrapper[4409]: I1203 14:26:13.960409 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:13.961221 master-0 kubenswrapper[4409]: I1203 14:26:13.961175 4409 scope.go:117] "RemoveContainer" containerID="aa024d4c0653252afb473b187106942d48c2412c2b937333e81a6fb1ddebaaf4" Dec 03 14:26:13.961734 master-0 kubenswrapper[4409]: E1203 14:26:13.961684 4409 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46)\"" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" Dec 03 14:26:14.416879 master-0 kubenswrapper[4409]: I1203 14:26:14.416812 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:14.416879 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:14.416879 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:14.416879 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:14.417572 master-0 kubenswrapper[4409]: I1203 14:26:14.416885 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814304 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814349 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814407 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814465 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: E1203 14:26:14.814498 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814529 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814518 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814548 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814518 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814643 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814729 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:14.814737 master-0 kubenswrapper[4409]: I1203 14:26:14.814771 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814782 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814797 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814801 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: E1203 14:26:14.814808 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814854 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814861 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814842 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814856 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814887 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814901 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814908 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814924 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814870 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814936 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814943 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814874 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814878 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814929 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814931 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814870 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814997 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815043 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814877 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814973 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815068 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815155 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815164 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815174 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815070 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815197 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815075 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815077 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.814845 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815130 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815251 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815133 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815166 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815283 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815061 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815054 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: E1203 14:26:14.815053 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815036 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815176 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815337 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815364 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815407 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: E1203 14:26:14.815359 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815430 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815453 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815461 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815436 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815482 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815434 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815512 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815483 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815464 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815494 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:14.815522 master-0 kubenswrapper[4409]: I1203 14:26:14.815483 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: I1203 14:26:14.815713 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.815720 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: I1203 14:26:14.815740 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.815923 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816023 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816122 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816251 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816329 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816387 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816479 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816567 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816644 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816730 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816817 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816922 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.816960 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817048 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817203 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817336 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817376 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817488 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817615 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:14.817875 master-0 kubenswrapper[4409]: E1203 14:26:14.817852 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.817917 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.817991 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818082 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818172 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818221 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818289 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818350 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818597 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:14.819119 master-0 kubenswrapper[4409]: E1203 14:26:14.818758 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:14.819531 master-0 kubenswrapper[4409]: E1203 14:26:14.819131 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:14.819531 master-0 kubenswrapper[4409]: E1203 14:26:14.819203 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:14.819531 master-0 kubenswrapper[4409]: E1203 14:26:14.819262 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:14.819531 master-0 kubenswrapper[4409]: E1203 14:26:14.819367 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:14.819531 master-0 kubenswrapper[4409]: E1203 14:26:14.819504 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:14.819748 master-0 kubenswrapper[4409]: E1203 14:26:14.819609 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:14.819748 master-0 kubenswrapper[4409]: E1203 14:26:14.819721 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:14.819901 master-0 kubenswrapper[4409]: E1203 14:26:14.819852 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:14.819993 master-0 kubenswrapper[4409]: E1203 14:26:14.819954 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:14.820125 master-0 kubenswrapper[4409]: E1203 14:26:14.820082 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:14.820256 master-0 kubenswrapper[4409]: E1203 14:26:14.820218 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:14.820346 master-0 kubenswrapper[4409]: E1203 14:26:14.820315 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:14.820523 master-0 kubenswrapper[4409]: E1203 14:26:14.820467 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:14.820587 master-0 kubenswrapper[4409]: E1203 14:26:14.820521 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:14.820639 master-0 kubenswrapper[4409]: E1203 14:26:14.820603 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:14.820683 master-0 kubenswrapper[4409]: E1203 14:26:14.820645 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:14.820752 master-0 kubenswrapper[4409]: E1203 14:26:14.820724 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:14.820844 master-0 kubenswrapper[4409]: E1203 14:26:14.820817 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:14.820918 master-0 kubenswrapper[4409]: E1203 14:26:14.820893 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:14.821031 master-0 kubenswrapper[4409]: E1203 14:26:14.820982 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:14.821132 master-0 kubenswrapper[4409]: E1203 14:26:14.821106 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:14.821290 master-0 kubenswrapper[4409]: E1203 14:26:14.821258 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:14.821397 master-0 kubenswrapper[4409]: E1203 14:26:14.821371 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:14.821494 master-0 kubenswrapper[4409]: E1203 14:26:14.821471 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:14.821623 master-0 kubenswrapper[4409]: E1203 14:26:14.821597 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:14.821801 master-0 kubenswrapper[4409]: E1203 14:26:14.821765 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:14.821909 master-0 kubenswrapper[4409]: E1203 14:26:14.821884 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:14.821963 master-0 kubenswrapper[4409]: E1203 14:26:14.821937 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:14.822053 master-0 kubenswrapper[4409]: E1203 14:26:14.822032 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:14.822133 master-0 kubenswrapper[4409]: E1203 14:26:14.822113 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:14.822238 master-0 kubenswrapper[4409]: E1203 14:26:14.822212 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:14.822331 master-0 kubenswrapper[4409]: E1203 14:26:14.822301 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:14.822471 master-0 kubenswrapper[4409]: E1203 14:26:14.822444 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:14.822532 master-0 kubenswrapper[4409]: E1203 14:26:14.822511 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:14.964636 master-0 kubenswrapper[4409]: I1203 14:26:14.964565 4409 scope.go:117] "RemoveContainer" containerID="aa024d4c0653252afb473b187106942d48c2412c2b937333e81a6fb1ddebaaf4" Dec 03 14:26:14.965172 master-0 kubenswrapper[4409]: E1203 14:26:14.964947 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-cb84b9cdf-qn94w_openshift-cluster-machine-approver(a9b62b2f-1e7a-4f1b-a988-4355d93dda46)\"" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" podUID="a9b62b2f-1e7a-4f1b-a988-4355d93dda46" Dec 03 14:26:15.415593 master-0 kubenswrapper[4409]: I1203 14:26:15.415456 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:15.415593 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:15.415593 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:15.415593 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:15.415593 master-0 kubenswrapper[4409]: I1203 14:26:15.415514 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:15.972411 master-0 kubenswrapper[4409]: I1203 14:26:15.972340 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" 
event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"8b09fd5bf853c0766c9d7f246179855ceb1627ea40f3c969995eac1a4288539b"} Dec 03 14:26:15.974440 master-0 kubenswrapper[4409]: I1203 14:26:15.974378 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/6.log" Dec 03 14:26:15.978656 master-0 kubenswrapper[4409]: I1203 14:26:15.978619 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"b8ac266f4abe7e4dc30c2e905713c6cf7a9148963f07ad18fb7353dbb7c97a11"} Dec 03 14:26:16.415659 master-0 kubenswrapper[4409]: I1203 14:26:16.415601 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:16.415659 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:16.415659 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:16.415659 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:16.415980 master-0 kubenswrapper[4409]: I1203 14:26:16.415679 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:16.666447 master-0 kubenswrapper[4409]: I1203 14:26:16.666305 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:16.666447 master-0 kubenswrapper[4409]: I1203 14:26:16.666400 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: I1203 14:26:16.666448 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: I1203 14:26:16.666530 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: I1203 14:26:16.666574 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: E1203 14:26:16.666574 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: I1203 14:26:16.666616 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: E1203 14:26:16.666630 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:16.666685 master-0 kubenswrapper[4409]: I1203 14:26:16.666656 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666675 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666729 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.666693142 +0000 UTC m=+16.993755708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666772 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666789 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.666753474 +0000 UTC m=+16.993816060 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666792 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666793 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666825 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.666808435 +0000 UTC m=+16.993871071 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:16.666902 master-0 kubenswrapper[4409]: E1203 14:26:16.666882 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: I1203 14:26:16.666908 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: E1203 14:26:16.666966 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: E1203 14:26:16.666988 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.666956859 +0000 UTC m=+16.994019395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: E1203 14:26:16.667058 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667040702 +0000 UTC m=+16.994103238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: I1203 14:26:16.667099 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:16.667178 master-0 kubenswrapper[4409]: I1203 14:26:16.667152 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: I1203 14:26:16.667196 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667215 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667229 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667199646 +0000 UTC m=+16.994262202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667273 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667255758 +0000 UTC m=+16.994318304 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667282 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667316 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667301529 +0000 UTC m=+16.994364165 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:16.667366 master-0 kubenswrapper[4409]: E1203 14:26:16.667316 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: I1203 14:26:16.667366 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: E1203 14:26:16.667400 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667376221 +0000 UTC m=+16.994438767 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: I1203 14:26:16.667459 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: E1203 14:26:16.667477 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: E1203 14:26:16.667530 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: I1203 14:26:16.667535 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: E1203 14:26:16.667546 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667525305 +0000 UTC m=+16.994587881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:16.667643 master-0 kubenswrapper[4409]: E1203 14:26:16.667635 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667610448 +0000 UTC m=+16.994672994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667646 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667685 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667664509 +0000 UTC m=+16.994727155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667714 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.66770003 +0000 UTC m=+16.994762576 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: I1203 14:26:16.667789 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667825 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667836 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667816154 +0000 UTC m=+16.994878690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: E1203 14:26:16.667873 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.667851785 +0000 UTC m=+16.994914291 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:16.667913 master-0 kubenswrapper[4409]: I1203 14:26:16.667910 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: I1203 14:26:16.667956 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: I1203 14:26:16.667995 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: I1203 14:26:16.668037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: I1203 14:26:16.668062 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: E1203 14:26:16.668067 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: E1203 14:26:16.668123 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668105662 +0000 UTC m=+16.995168228 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: E1203 14:26:16.668138 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: E1203 14:26:16.668153 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: E1203 14:26:16.668156 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:16.668201 master-0 kubenswrapper[4409]: I1203 14:26:16.668173 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668190 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668174244 +0000 UTC m=+16.995236780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668220 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668248 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668229865 +0000 UTC m=+16.995292521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668248 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668280 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668264976 +0000 UTC m=+16.995327622 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668334 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668312448 +0000 UTC m=+16.995374994 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: I1203 14:26:16.668391 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668426 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: I1203 14:26:16.668466 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:16.668519 master-0 kubenswrapper[4409]: E1203 14:26:16.668475 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668460122 +0000 UTC m=+16.995522668 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668544 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668525344 +0000 UTC m=+16.995587950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668551 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668592 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668579085 +0000 UTC m=+16.995641631 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668589 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668647 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668690 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668733 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668693 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668773 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668773 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668833 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668802962 +0000 UTC m=+16.995865508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668845 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668850 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668867 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668852893 +0000 UTC m=+16.995915549 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668741 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668891 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668879264 +0000 UTC m=+16.995941800 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.668927 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.668903745 +0000 UTC m=+16.995966291 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.668980 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.669090 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.669118 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: I1203 14:26:16.669150 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:16.669162 master-0 kubenswrapper[4409]: E1203 14:26:16.669159 4409
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669140451 +0000 UTC m=+16.996203107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669205 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669224 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669203383 +0000 UTC m=+16.996266019 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669268 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669316 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669356 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669331127 +0000 UTC m=+16.996393803 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669406 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669428 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669437 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669455 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669464 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.6694562 +0000 UTC m=+16.996518696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669504 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669491551 +0000 UTC m=+16.996554097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669520 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669556 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669562 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669550323 +0000 UTC m=+16.996612829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669601 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669590234 +0000 UTC m=+16.996652770 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669627 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669649 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669692 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669674056 +0000 UTC m=+16.996736652 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669741 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669785 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669772559 +0000 UTC m=+16.996835105 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669791 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669896 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:16.669965 
master-0 kubenswrapper[4409]: I1203 14:26:16.669906 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: E1203 14:26:16.669936 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.669923084 +0000 UTC m=+16.996985710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:16.669965 master-0 kubenswrapper[4409]: I1203 14:26:16.669972 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670073 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670118 4409 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670123 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670143 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67013747 +0000 UTC m=+16.997199966 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670149 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670179 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670238 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs 
podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670180541 +0000 UTC m=+16.997243117 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670284 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670299 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670321 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670355 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670334925 +0000 UTC m=+16.997397501 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670407 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670503 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670505 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670555 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670528081 +0000 UTC m=+16.997590677 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670571 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670633 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670613773 +0000 UTC m=+16.997676389 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670675 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670708 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: I1203 14:26:16.670723 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.670745 master-0 kubenswrapper[4409]: E1203 14:26:16.670734 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670726846 +0000 UTC m=+16.997789352 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.670773 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.670776 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670764387 +0000 UTC m=+16.997826923 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.670829 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670811079 +0000 UTC m=+16.997873765 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.670877 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.670932 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 
14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.670948 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.670964 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670954993 +0000 UTC m=+16.998017649 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.670991 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671002 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.670984524 +0000 UTC m=+16.998047070 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671056 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671067 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671080 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671074146 +0000 UTC m=+16.998136652 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671084 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671112 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671125 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671137 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671131818 +0000 UTC m=+16.998194324 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671196 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671217 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.6712093 +0000 UTC m=+16.998271806 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671240 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671243 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671290 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671316 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671311633 +0000 UTC m=+16.998374139 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671322 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671342 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671380 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671364264 +0000 UTC m=+16.998426810 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671416 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671421 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671443 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671401315 +0000 UTC m=+16.998463911 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: I1203 14:26:16.671345 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671474 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671458397 +0000 UTC m=+16.998520943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:16.671480 master-0 kubenswrapper[4409]: E1203 14:26:16.671499 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671488668 +0000 UTC m=+16.998551214 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671567 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671610 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671651 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671680 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671736 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object 
"openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671732 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671767 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671755695 +0000 UTC m=+16.998818211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671792 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671779866 +0000 UTC m=+16.998842452 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671794 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671815 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671827 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671854 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671881 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.671858008 +0000 UTC m=+16.998920594 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671902 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671941 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67192711 +0000 UTC m=+16.998989646 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.671940 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671959 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671963 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.671971 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.671960591 +0000 UTC m=+16.999023127 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.672208 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672274 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672353 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672332402 +0000 UTC m=+16.999394948 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672354 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672387 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672370903 +0000 UTC m=+16.999433529 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.672278 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672418 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672401244 +0000 UTC m=+16.999463780 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: E1203 14:26:16.672441 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672431035 +0000 UTC m=+16.999493571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:16.672454 master-0 kubenswrapper[4409]: I1203 14:26:16.672472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672518 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672561 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: 
\"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672646 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672677 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672697 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672683842 +0000 UTC m=+16.999746388 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672746 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672684 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:16.673344 master-0 
kubenswrapper[4409]: E1203 14:26:16.672748 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672727473 +0000 UTC m=+16.999790089 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672789 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672809 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672794145 +0000 UTC m=+16.999856681 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672750 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672841 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672866 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.672845346 +0000 UTC m=+16.999907942 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672909 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.672891198 +0000 UTC m=+16.999953734 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672959 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.672987 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.673064 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673040952 +0000 UTC m=+17.000103538 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.672961 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: E1203 14:26:16.673108 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673090623 +0000 UTC m=+17.000153259 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.673187 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.673288 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:16.673344 master-0 kubenswrapper[4409]: I1203 14:26:16.673357 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673392 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.673431 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673460 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673464 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673442533 +0000 UTC m=+17.000505159 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673483 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673544 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673528956 +0000 UTC m=+17.000591492 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673566 4409 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.673536 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673607 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673633 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673571 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673558757 +0000 UTC m=+17.000621303 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673649 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673786 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673765152 +0000 UTC m=+17.000827688 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.673823 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.673888 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673914 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673887156 +0000 UTC m=+17.000949722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673926 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.673969 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.673932 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.674056 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.673998179 +0000 UTC m=+17.001060725 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: E1203 14:26:16.674085 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674075901 +0000 UTC m=+17.001138407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Dec 03 14:26:16.674108 master-0 kubenswrapper[4409]: I1203 14:26:16.674108 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674146 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674178 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674192 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674234 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674227906 +0000 UTC m=+17.001290412 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674155 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674287 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674265717 +0000 UTC m=+17.001328263 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674332 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674384 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674340 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674423 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674430 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674455 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674447452 +0000 UTC m=+17.001509958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674493 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674472082 +0000 UTC m=+17.001534698 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674515 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674514 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674544 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674562 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674551115 +0000 UTC m=+17.001613741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674658 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674630057 +0000 UTC m=+17.001692613 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: I1203 14:26:16.674721 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:16.674842 master-0 kubenswrapper[4409]: E1203 14:26:16.674774 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.674863 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.674895 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674864564 +0000 UTC m=+17.001927130 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.674944 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.674924695 +0000 UTC m=+17.001987331 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.674985 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675038 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675096 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67507492 +0000 UTC m=+17.002137466 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675134 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675148 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675197 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675180243 +0000 UTC m=+17.002242789 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675234 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675283 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675243 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675322 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675356 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675338667 +0000 UTC m=+17.002401203 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675380 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675378 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675417 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: I1203 14:26:16.675395 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675422 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675410319 +0000 UTC m=+17.002472865 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675469 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:16.675504 master-0 kubenswrapper[4409]: E1203 14:26:16.675491 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675478841 +0000 UTC m=+17.002541347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675536 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675561 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675590 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675605 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675593564 +0000 UTC m=+17.002656100 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675636 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675643 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675631535 +0000 UTC m=+17.002694071 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675691 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675690 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675673247 +0000 UTC m=+17.002735783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675719 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675733 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675743 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675763 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675746889 +0000 UTC m=+17.002809435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675862 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.675903 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675911 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675935 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675921054 +0000 UTC m=+17.002983600 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675964 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.675953865 +0000 UTC m=+17.003016401 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.675993 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.676038 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676029127 +0000 UTC m=+17.003091633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.676062 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.676085 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676067858 +0000 UTC m=+17.003130494 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.676100 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: E1203 14:26:16.676123 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676117129 +0000 UTC m=+17.003179635 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Dec 03 14:26:16.676126 master-0 kubenswrapper[4409]: I1203 14:26:16.676120 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676168 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676210 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676269 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676281 
4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676330 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676313895 +0000 UTC m=+17.003376441 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676346 4409 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676343 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676438 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676463 4409 configmap.go:193] Couldn't get configMap 
openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676459 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676484 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676470719 +0000 UTC m=+17.003533255 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676514 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67649825 +0000 UTC m=+17.003560786 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676543 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676530871 +0000 UTC m=+17.003593407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676600 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676631 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676606583 +0000 UTC m=+17.003669129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676662 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676685 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676701 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676690175 +0000 UTC m=+17.003752721 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676745 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676788 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676772858 +0000 UTC m=+17.003835404 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676841 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: I1203 14:26:16.676888 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:16.676893 master-0 kubenswrapper[4409]: E1203 14:26:16.676848 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 
14:26:16.676914 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.676927 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.676942 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676933942 +0000 UTC m=+17.003996448 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.676966 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.676953203 +0000 UTC m=+17.004015739 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677001 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677055 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677086 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677111 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677136 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677156 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677179 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677158619 +0000 UTC m=+17.004221235 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677186 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677216 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677245 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.677229691 +0000 UTC m=+17.004292357 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677275 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677260802 +0000 UTC m=+17.004323438 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677280 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677328 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677314933 +0000 UTC m=+17.004377469 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677328 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677359 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677348034 +0000 UTC m=+17.004410570 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677380 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677410 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677426 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677413166 +0000 UTC m=+17.004475702 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677490 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677535 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677593 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677635 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677646 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert 
podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677632092 +0000 UTC m=+17.004694638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: I1203 14:26:16.677636 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:16.677691 master-0 kubenswrapper[4409]: E1203 14:26:16.677690 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677675453 +0000 UTC m=+17.004737989 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677699 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677784 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677772026 +0000 UTC m=+17.004834562 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677705 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.677819 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677827 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677817777 +0000 UTC m=+17.004880283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.677880 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.677920 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677940 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.677962 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677989 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.677976482 +0000 UTC m=+17.005039018 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.677990 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678061 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678072 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678056 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678089 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678068995 +0000 UTC m=+17.005131601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678131 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678121176 +0000 UTC m=+17.005183822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678145 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678151 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678140727 +0000 UTC m=+17.005203363 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678262 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678312 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678409 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678392204 +0000 UTC m=+17.005454740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678404 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678428 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678477 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678488 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678500 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678481536 +0000 UTC m=+17.005544102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678493 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678526 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678515897 +0000 UTC m=+17.005578443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678550 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678539828 +0000 UTC m=+17.005602364 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678581 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678621 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678646 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67862785 +0000 UTC m=+17.005690436 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678701 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678744 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678731043 +0000 UTC m=+17.005793589 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: I1203 14:26:16.678698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.678756 master-0 kubenswrapper[4409]: E1203 14:26:16.678786 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.678820 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.678860 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678837776 +0000 UTC m=+17.005900352 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.678874 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.678906 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.678897078 +0000 UTC m=+17.005959904 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.678908 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.678962 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679083 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679050452 +0000 UTC m=+17.006113018 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679120 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679152 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679143455 +0000 UTC m=+17.006206091 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679144 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679195 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679224 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679271 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679228 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679314 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679291029 +0000 UTC m=+17.006353665 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679315 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679353 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67933534 +0000 UTC m=+17.006397976 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679403 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679477 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679516 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679489675 +0000 UTC m=+17.006552211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679523 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679572 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679571 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679583 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679567227 +0000 UTC m=+17.006629763 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679671 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679676 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.67965513 +0000 UTC m=+17.006717676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679794 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: I1203 14:26:16.679860 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679874 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679848915 +0000 UTC m=+17.006911521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679939 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:16.679978 master-0 kubenswrapper[4409]: E1203 14:26:16.679972 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: I1203 14:26:16.679937 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.679983 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.679990 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.679976399 +0000 UTC m=+17.007038935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680161 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.680139603 +0000 UTC m=+17.007202229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: I1203 14:26:16.680208 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680255 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: I1203 14:26:16.680278 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680312 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.680284687 +0000 UTC m=+17.007347243 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680329 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680351 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.680334969 +0000 UTC m=+17.007397515 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: I1203 14:26:16.680442 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680527 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.680501684 +0000 UTC m=+17.007564230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680626 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Dec 03 14:26:16.680977 master-0 kubenswrapper[4409]: E1203 14:26:16.680706 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.680686509 +0000 UTC m=+17.007749055 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered
Dec 03 14:26:16.782424 master-0 kubenswrapper[4409]: I1203 14:26:16.782346 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:16.782698 master-0 kubenswrapper[4409]: E1203 14:26:16.782646 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 03 14:26:16.782698 master-0 kubenswrapper[4409]: E1203 14:26:16.782695 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 03 14:26:16.782865 master-0 kubenswrapper[4409]: E1203 14:26:16.782712 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:16.782865 master-0 kubenswrapper[4409]: E1203 14:26:16.782792 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:24.782772384 +0000 UTC m=+17.109834890 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.784474 master-0 kubenswrapper[4409]: I1203 14:26:16.784433 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:16.784561 master-0 kubenswrapper[4409]: I1203 14:26:16.784487 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:16.784561 master-0 kubenswrapper[4409]: E1203 14:26:16.784528 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:16.784561 master-0 kubenswrapper[4409]: I1203 14:26:16.784550 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:16.784681 master-0 kubenswrapper[4409]: I1203 
14:26:16.784579 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:16.784681 master-0 kubenswrapper[4409]: E1203 14:26:16.784608 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.784585645 +0000 UTC m=+17.111648241 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:16.784681 master-0 kubenswrapper[4409]: E1203 14:26:16.784639 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:16.784681 master-0 kubenswrapper[4409]: E1203 14:26:16.784655 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:16.784681 master-0 kubenswrapper[4409]: I1203 14:26:16.784661 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: E1203 14:26:16.784690 4409 secret.go:189] Couldn't get secret 
openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: E1203 14:26:16.784705 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: E1203 14:26:16.784674 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.784665437 +0000 UTC m=+17.111728063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: I1203 14:26:16.784779 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: E1203 14:26:16.784830 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: I1203 14:26:16.784852 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: 
\"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:16.784889 master-0 kubenswrapper[4409]: E1203 14:26:16.784871 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.784861653 +0000 UTC m=+17.111924159 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: I1203 14:26:16.784927 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.784943 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.784918375 +0000 UTC m=+17.111980931 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.785040 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.784989437 +0000 UTC m=+17.112051983 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.785050 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.785065 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.785052148 +0000 UTC m=+17.112114704 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: I1203 14:26:16.785103 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.785132 4409 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: I1203 14:26:16.785148 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:16.785200 master-0 kubenswrapper[4409]: E1203 14:26:16.785176 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.785162232 +0000 UTC m=+17.112224778 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785216 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785251 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785264 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785298 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.785285245 +0000 UTC m=+17.112347791 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785331 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785346 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785384 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785419 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785429 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785457 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.78544393 +0000 UTC m=+17.112506476 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: I1203 14:26:16.785491 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785521 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785540 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered 
Dec 03 14:26:16.785559 master-0 kubenswrapper[4409]: E1203 14:26:16.785560 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785602 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.785589544 +0000 UTC m=+17.112652090 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: I1203 14:26:16.785538 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: I1203 14:26:16.785679 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: I1203 14:26:16.785725 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: I1203 14:26:16.785769 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785611 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785858 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785900 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785671 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:16.786195 master-0 
kubenswrapper[4409]: E1203 14:26:16.785947 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785737 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785786 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.785804 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.785790889 +0000 UTC m=+17.112853435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.786040 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:16.786195 master-0 kubenswrapper[4409]: E1203 14:26:16.786060 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.786043847 +0000 UTC m=+17.113106393 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786261 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786237412 +0000 UTC m=+17.113299958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786290 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786279433 +0000 UTC m=+17.113341979 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786310 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786300444 +0000 UTC m=+17.113362990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786330 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786320624 +0000 UTC m=+17.113383170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786357 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786344525 +0000 UTC m=+17.113407071 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786379 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786369156 +0000 UTC m=+17.113431702 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786399 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786388816 +0000 UTC m=+17.113451362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 03 14:26:16.786785 master-0 kubenswrapper[4409]: E1203 14:26:16.786418 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.786409097 +0000 UTC m=+17.113471643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814573 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814626 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814667 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814678 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814635 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814731 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814942 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814951 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: E1203 14:26:16.814946 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814998 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.814977 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815039 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815080 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815087 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815119 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815041 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815162 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815167 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:16.815167 master-0 kubenswrapper[4409]: I1203 14:26:16.815173 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815201 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815241 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815082 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815130 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815108 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815122 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815403 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815083 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815044 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815427 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815363 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.815362 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815447 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815370 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815485 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815418 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815507 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815450 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815531 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815555 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815549 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815585 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815369 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815614 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815529 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815659 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815906 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815540 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815946 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815949 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816157 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815586 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815411 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815496 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815535 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816261 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815397 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815627 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815560 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815976 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.815951 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815550 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816035 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816048 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816058 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816061 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816081 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816082 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816093 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816082 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816096 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815590 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.815413 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: I1203 14:26:16.816035 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816638 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816479 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816541 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816431 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816354 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816703 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.816842 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.817350 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.817407 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.817498 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:26:16.817502 master-0 kubenswrapper[4409]: E1203 14:26:16.817618 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.817715 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.817772 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.817871 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.817931 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.818234 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.818343 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.818521 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.818658 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.818840 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819042 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819137 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819296 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819441 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819516 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819716 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.819917 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820065 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820211 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820346 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820555 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820675 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820787 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.820957 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821073 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821170 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821344 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821481 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821637 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821734 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.821836 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.822054 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:16.822231 master-0 kubenswrapper[4409]: E1203 14:26:16.822269 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.822504 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.822660 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.822785 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.822909 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.823073 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.823155 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.823259 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.823576 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:16.823845 master-0 kubenswrapper[4409]: E1203 14:26:16.823499 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:16.824406 master-0 kubenswrapper[4409]: E1203 14:26:16.824062 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:16.824406 master-0 kubenswrapper[4409]: E1203 14:26:16.824231 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:16.824406 master-0 kubenswrapper[4409]: E1203 14:26:16.824386 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:16.824603 master-0 kubenswrapper[4409]: E1203 14:26:16.824557 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:16.824852 master-0 kubenswrapper[4409]: E1203 14:26:16.824789 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:16.824929 master-0 kubenswrapper[4409]: E1203 14:26:16.824859 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:16.825100 master-0 kubenswrapper[4409]: E1203 14:26:16.825045 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:16.825217 master-0 kubenswrapper[4409]: E1203 14:26:16.825172 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:16.825389 master-0 kubenswrapper[4409]: E1203 14:26:16.825324 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:16.825507 master-0 kubenswrapper[4409]: E1203 14:26:16.825463 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:16.825610 master-0 kubenswrapper[4409]: E1203 14:26:16.825572 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:16.991178 master-0 kubenswrapper[4409]: I1203 14:26:16.990659 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="b8ac266f4abe7e4dc30c2e905713c6cf7a9148963f07ad18fb7353dbb7c97a11" exitCode=0 Dec 03 14:26:16.991178 master-0 kubenswrapper[4409]: I1203 14:26:16.990711 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"b8ac266f4abe7e4dc30c2e905713c6cf7a9148963f07ad18fb7353dbb7c97a11"} Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993503 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993546 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993585 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993616 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993645 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993758 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993771 4409 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 
kubenswrapper[4409]: E1203 14:26:16.993798 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993842 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993852 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993862 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993864 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993873 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 
03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993884 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993803 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993929 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993953 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.993935552 +0000 UTC m=+17.320998058 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993780 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993968 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.993962123 +0000 UTC m=+17.321024629 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993971 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993981 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.993974143 +0000 UTC m=+17.321036759 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993985 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993995 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994073 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.994027465 +0000 UTC m=+17.321089961 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.993928 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994104 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994120 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994118 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994129 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994147 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.993974 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994165 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994169 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.994159528 +0000 UTC m=+17.321222114 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994236 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.99421205 +0000 UTC m=+17.321274596 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994250 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994268 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994278 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object 
"openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.994294 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994332 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994340 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.994311833 +0000 UTC m=+17.321374379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994385 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994396 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.994402 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994420 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.994412315 +0000 UTC m=+17.321474822 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994559 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994572 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994579 4409 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994680 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.994669593 +0000 UTC m=+17.321732099 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.994948 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.99493236 +0000 UTC m=+17.321994936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.995409 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995505 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: 
E1203 14:26:16.995535 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995545 4409 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995571 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.995563938 +0000 UTC m=+17.322626444 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.995587 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.995643 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995707 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995746 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995767 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.995811 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995840 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv 
podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.995814355 +0000 UTC m=+17.322876901 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995907 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995918 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995976 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.996053 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.995940 4409 
projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996116 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996098 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.996072503 +0000 UTC m=+17.323135049 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996204 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.996220 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996234 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996276 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.996261968 +0000 UTC m=+17.323324494 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996334 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996367 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996388 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object 
"openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996426 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.996367 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996440 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996480 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996440 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.996425163 +0000 UTC m=+17.323487669 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996552 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.996531316 +0000 UTC m=+17.323593862 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.996651 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996663 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.996652319 +0000 UTC m=+17.323714825 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996730 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996750 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996761 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.996818 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 
14:26:16.996897 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.996882965 +0000 UTC m=+17.323945501 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.996920 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997069 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997075 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997098 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 
master-0 kubenswrapper[4409]: I1203 14:26:16.996987 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997166 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.997144443 +0000 UTC m=+17.324206989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997082 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997262 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.997389 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997405 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.99738074 +0000 UTC m=+17.324443276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997466 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997505 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.997497 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: 
\"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997538 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.997527814 +0000 UTC m=+17.324590380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997566 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997649 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997671 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: I1203 14:26:16.997695 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:16.997622 master-0 kubenswrapper[4409]: E1203 14:26:16.997737 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.997716159 +0000 UTC m=+17.324778705 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.997855 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.997884 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.997904 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 
14:26:16.997981 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.997958626 +0000 UTC m=+17.325021162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.998229 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.998335 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998470 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998516 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998538 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998481 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998579 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998589 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998606 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.998583824 +0000 UTC m=+17.325646370 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998636 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998685 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998706 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.998511 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998641 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:24.998625065 +0000 UTC m=+17.325687631 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.998845 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.998878 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998955 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.998927503 +0000 UTC m=+17.325990049 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998984 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.998998 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999021 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999058 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.999048537 +0000 UTC m=+17.326111113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.999083 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999096 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.999115 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999128 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999146 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" 
not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: I1203 14:26:16.999160 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999212 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.999190171 +0000 UTC m=+17.326252727 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999235 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999251 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999261 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object 
"openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999276 4409 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999308 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999328 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999279 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999403 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999421 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999291 4409 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.999281374 +0000 UTC m=+17.326343880 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999568 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.999554441 +0000 UTC m=+17.326616947 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.003156 master-0 kubenswrapper[4409]: E1203 14:26:16.999590 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:24.999584152 +0000 UTC m=+17.326646658 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.309655 master-0 kubenswrapper[4409]: I1203 14:26:17.309615 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:17.309925 master-0 kubenswrapper[4409]: E1203 14:26:17.309857 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:17.309980 master-0 kubenswrapper[4409]: E1203 14:26:17.309962 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.309980 master-0 kubenswrapper[4409]: E1203 14:26:17.309978 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.310124 master-0 kubenswrapper[4409]: E1203 14:26:17.310064 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.310042506 +0000 UTC m=+17.637105012 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.310217 master-0 kubenswrapper[4409]: I1203 14:26:17.309889 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:17.310406 master-0 kubenswrapper[4409]: E1203 14:26:17.310353 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:17.310451 master-0 kubenswrapper[4409]: E1203 14:26:17.310414 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.310451 master-0 kubenswrapper[4409]: E1203 14:26:17.310440 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.310605 master-0 kubenswrapper[4409]: I1203 14:26:17.310586 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:17.310736 master-0 kubenswrapper[4409]: E1203 14:26:17.310708 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.310678784 +0000 UTC m=+17.637741320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.310909 master-0 kubenswrapper[4409]: I1203 14:26:17.310873 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:17.311047 master-0 kubenswrapper[4409]: E1203 14:26:17.310889 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 
14:26:17.311116 master-0 kubenswrapper[4409]: I1203 14:26:17.311043 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:17.311116 master-0 kubenswrapper[4409]: E1203 14:26:17.311062 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.311116 master-0 kubenswrapper[4409]: E1203 14:26:17.311084 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311243 master-0 kubenswrapper[4409]: E1203 14:26:17.311165 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.311144668 +0000 UTC m=+17.638207264 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311243 master-0 kubenswrapper[4409]: E1203 14:26:17.311170 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:17.311243 master-0 kubenswrapper[4409]: E1203 14:26:17.311187 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.311243 master-0 kubenswrapper[4409]: E1203 14:26:17.311194 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311243 master-0 kubenswrapper[4409]: E1203 14:26:17.311223 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.31121577 +0000 UTC m=+17.638278276 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311457 master-0 kubenswrapper[4409]: I1203 14:26:17.311298 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:17.311457 master-0 kubenswrapper[4409]: I1203 14:26:17.311345 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:17.311457 master-0 kubenswrapper[4409]: I1203 14:26:17.311386 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:17.311613 master-0 kubenswrapper[4409]: E1203 14:26:17.311517 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:17.311613 master-0 kubenswrapper[4409]: E1203 14:26:17.311537 4409 
projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.311613 master-0 kubenswrapper[4409]: E1203 14:26:17.311547 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311613 master-0 kubenswrapper[4409]: E1203 14:26:17.311581 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.31157118 +0000 UTC m=+17.638633686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311626 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311652 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311660 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod 
openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311670 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311704 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311724 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311848 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.311837247 +0000 UTC m=+17.638899753 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.311872 master-0 kubenswrapper[4409]: E1203 14:26:17.311864 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.311858738 +0000 UTC m=+17.638921244 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.312252 master-0 kubenswrapper[4409]: E1203 14:26:17.311886 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:17.312252 master-0 kubenswrapper[4409]: E1203 14:26:17.311904 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.312252 master-0 kubenswrapper[4409]: E1203 14:26:17.311914 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object 
"openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.312252 master-0 kubenswrapper[4409]: E1203 14:26:17.312036 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.312022542 +0000 UTC m=+17.639085148 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.415709 master-0 kubenswrapper[4409]: I1203 14:26:17.415324 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:17.415709 master-0 kubenswrapper[4409]: E1203 14:26:17.415681 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:17.415709 master-0 kubenswrapper[4409]: I1203 14:26:17.415699 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415722 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415748 4409 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415842 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.415816686 +0000 UTC m=+17.742879232 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415911 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415937 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.415954 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: I1203 14:26:17.415988 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.416082 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 
podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.416060843 +0000 UTC m=+17.743123379 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416115 master-0 kubenswrapper[4409]: E1203 14:26:17.416133 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:17.416792 master-0 kubenswrapper[4409]: E1203 14:26:17.416156 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.416792 master-0 kubenswrapper[4409]: E1203 14:26:17.416172 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416792 master-0 kubenswrapper[4409]: E1203 14:26:17.416221 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.416204967 +0000 UTC m=+17.743267513 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.416792 master-0 kubenswrapper[4409]: I1203 14:26:17.416171 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:17.416792 master-0 kubenswrapper[4409]: E1203 14:26:17.416294 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:17.417235 master-0 kubenswrapper[4409]: E1203 14:26:17.416863 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.417235 master-0 kubenswrapper[4409]: E1203 14:26:17.416917 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.417235 master-0 kubenswrapper[4409]: E1203 14:26:17.417196 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m 
podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.417137793 +0000 UTC m=+17.744200469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.417530 master-0 kubenswrapper[4409]: I1203 14:26:17.417391 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:17.417725 master-0 kubenswrapper[4409]: E1203 14:26:17.417657 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:17.417725 master-0 kubenswrapper[4409]: E1203 14:26:17.417709 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.417725 master-0 kubenswrapper[4409]: E1203 14:26:17.417728 4409 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.418982 
master-0 kubenswrapper[4409]: E1203 14:26:17.418928 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.418899503 +0000 UTC m=+17.745962229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.419888 master-0 kubenswrapper[4409]: I1203 14:26:17.419802 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:17.419888 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:17.419888 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:17.419888 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:17.420259 master-0 kubenswrapper[4409]: I1203 14:26:17.419898 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:17.732147 master-0 kubenswrapper[4409]: I1203 14:26:17.731955 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod 
\"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:17.732147 master-0 kubenswrapper[4409]: I1203 14:26:17.732113 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:17.732505 master-0 kubenswrapper[4409]: E1203 14:26:17.732254 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:17.732505 master-0 kubenswrapper[4409]: E1203 14:26:17.732313 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.732505 master-0 kubenswrapper[4409]: E1203 14:26:17.732340 4409 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.732505 master-0 kubenswrapper[4409]: E1203 14:26:17.732377 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:17.732505 master-0 kubenswrapper[4409]: E1203 14:26:17.732455 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:17.732817 master-0 kubenswrapper[4409]: E1203 
14:26:17.732525 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.732817 master-0 kubenswrapper[4409]: E1203 14:26:17.732459 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.732425534 +0000 UTC m=+18.059488070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.733367 master-0 kubenswrapper[4409]: I1203 14:26:17.733301 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:17.733367 master-0 kubenswrapper[4409]: E1203 14:26:17.733327 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.733292578 +0000 UTC m=+18.060355144 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:17.733580 master-0 kubenswrapper[4409]: E1203 14:26:17.733463 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:17.733580 master-0 kubenswrapper[4409]: E1203 14:26:17.733495 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:17.733580 master-0 kubenswrapper[4409]: E1203 14:26:17.733571 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.733548096 +0000 UTC m=+18.060610712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:17.734145 master-0 kubenswrapper[4409]: I1203 14:26:17.734098 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:17.734390 master-0 kubenswrapper[4409]: E1203 14:26:17.734343 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:17.734390 master-0 kubenswrapper[4409]: E1203 14:26:17.734384 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:17.734593 master-0 kubenswrapper[4409]: E1203 14:26:17.734458 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:25.734435981 +0000 UTC m=+18.061498537 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:17.830868 master-0 kubenswrapper[4409]: I1203 14:26:17.830815 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:17.831271 master-0 kubenswrapper[4409]: I1203 14:26:17.831234 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:17.836267 master-0 kubenswrapper[4409]: I1203 14:26:17.836157 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:26:18.001989 master-0 kubenswrapper[4409]: I1203 14:26:18.001930 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" event={"ID":"77430348-b53a-4898-8047-be8bb542a0a7","Type":"ContainerStarted","Data":"2d14509560ee1bd48a58ecf7c7bf4100362a0b75c25b282d877f4596448fc483"} Dec 03 14:26:18.003115 master-0 kubenswrapper[4409]: I1203 14:26:18.002510 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:18.006613 master-0 kubenswrapper[4409]: I1203 14:26:18.006495 4409 generic.go:334] "Generic (PLEG): container finished" podID="19c2a40b-213c-42f1-9459-87c2e780a75f" containerID="cef0ad33b092191cb3aaafb80ec4aa146b398fc5ccdc1f97e543466c6adc98ef" exitCode=0 Dec 03 14:26:18.006960 master-0 kubenswrapper[4409]: I1203 14:26:18.006879 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerDied","Data":"cef0ad33b092191cb3aaafb80ec4aa146b398fc5ccdc1f97e543466c6adc98ef"} Dec 03 
14:26:18.023105 master-0 kubenswrapper[4409]: E1203 14:26:18.023041 4409 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:26:18.046390 master-0 kubenswrapper[4409]: I1203 14:26:18.045888 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:18.251987 master-0 kubenswrapper[4409]: I1203 14:26:18.251936 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:26:18.416509 master-0 kubenswrapper[4409]: I1203 14:26:18.416456 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:18.416509 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:18.416509 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:18.416509 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:18.416668 master-0 kubenswrapper[4409]: I1203 14:26:18.416534 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:18.443069 master-0 kubenswrapper[4409]: I1203 14:26:18.442945 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:18.448769 master-0 kubenswrapper[4409]: I1203 14:26:18.448709 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:18.569312 master-0 kubenswrapper[4409]: I1203 14:26:18.569230 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:18.591443 master-0 kubenswrapper[4409]: I1203 14:26:18.591412 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815297 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815366 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815429 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815432 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815462 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815472 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815525 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815542 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815557 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815575 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815617 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815642 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815661 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815681 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815725 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815747 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815749 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815760 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815800 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815806 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815849 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815874 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815889 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815911 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815917 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.815966 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815971 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.815989 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816024 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816041 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816058 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: E1203 14:26:18.816096 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816136 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816137 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:18.816135 master-0 kubenswrapper[4409]: I1203 14:26:18.816167 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816209 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816226 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.816264 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816284 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816307 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816314 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816336 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816350 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816370 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816381 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.816429 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816444 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816452 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816460 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816486 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816473 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816501 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816393 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816509 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816523 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816566 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816577 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816560 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816587 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816592 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816562 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.816545 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816608 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816616 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816637 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816579 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816174 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816764 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.816755 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816777 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816793 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816818 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816833 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816846 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816837 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816870 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816852 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816951 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.816965 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816977 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: I1203 14:26:18.816989 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817061 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817168 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817223 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817271 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817442 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:18.817556 master-0 kubenswrapper[4409]: E1203 14:26:18.817535 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817627 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817681 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817750 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817840 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817916 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.817978 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818056 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818116 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818196 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818268 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818355 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818466 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818531 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818611 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818686 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818761 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818867 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:18.819066 master-0 kubenswrapper[4409]: E1203 14:26:18.818958 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819185 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819267 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819325 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819436 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819512 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819553 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:18.819606 master-0 kubenswrapper[4409]: E1203 14:26:18.819605 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:18.819909 master-0 kubenswrapper[4409]: E1203 14:26:18.819674 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:18.819909 master-0 kubenswrapper[4409]: E1203 14:26:18.819769 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:18.819909 master-0 kubenswrapper[4409]: E1203 14:26:18.819890 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:18.819995 master-0 kubenswrapper[4409]: E1203 14:26:18.819970 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:18.820139 master-0 kubenswrapper[4409]: E1203 14:26:18.820109 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:18.820223 master-0 kubenswrapper[4409]: E1203 14:26:18.820199 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:18.820302 master-0 kubenswrapper[4409]: E1203 14:26:18.820277 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:18.820400 master-0 kubenswrapper[4409]: E1203 14:26:18.820371 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:18.820511 master-0 kubenswrapper[4409]: E1203 14:26:18.820492 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:18.820583 master-0 kubenswrapper[4409]: E1203 14:26:18.820565 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:18.820659 master-0 kubenswrapper[4409]: E1203 14:26:18.820641 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:18.820763 master-0 kubenswrapper[4409]: E1203 14:26:18.820745 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:18.820839 master-0 kubenswrapper[4409]: E1203 14:26:18.820819 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:18.821103 master-0 kubenswrapper[4409]: E1203 14:26:18.821077 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:18.821160 master-0 kubenswrapper[4409]: E1203 14:26:18.821136 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:18.821229 master-0 kubenswrapper[4409]: E1203 14:26:18.821202 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:18.821328 master-0 kubenswrapper[4409]: E1203 14:26:18.821300 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:18.821435 master-0 kubenswrapper[4409]: E1203 14:26:18.821408 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:18.821532 master-0 kubenswrapper[4409]: E1203 14:26:18.821510 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:18.821607 master-0 kubenswrapper[4409]: E1203 14:26:18.821589 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:18.821664 master-0 kubenswrapper[4409]: E1203 14:26:18.821644 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:18.821903 master-0 kubenswrapper[4409]: E1203 14:26:18.821864 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:18.821988 master-0 kubenswrapper[4409]: E1203 14:26:18.821962 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:18.822112 master-0 kubenswrapper[4409]: E1203 14:26:18.822082 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:19.014032 master-0 kubenswrapper[4409]: I1203 14:26:19.013671 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:19.014530 master-0 kubenswrapper[4409]: I1203 14:26:19.013955 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-42hmk" event={"ID":"19c2a40b-213c-42f1-9459-87c2e780a75f","Type":"ContainerStarted","Data":"0a3a7c8ad01663cafc5f65f43d0c9de9bf31d2593111c10ee23a150653bd35fd"} Dec 03 14:26:19.017764 master-0 kubenswrapper[4409]: I1203 14:26:19.017726 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:26:19.249654 master-0 kubenswrapper[4409]: I1203 14:26:19.249525 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Dec 03 14:26:19.249654 master-0 kubenswrapper[4409]: I1203 14:26:19.249569 4409 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-03T14:26:19Z","lastTransitionTime":"2025-12-03T14:26:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 03 14:26:19.416575 master-0 kubenswrapper[4409]: I1203 14:26:19.416499 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:19.416575 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:19.416575 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:19.416575 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:19.416867 master-0 kubenswrapper[4409]: I1203 14:26:19.416599 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:20.018077 master-0 kubenswrapper[4409]: I1203 14:26:20.017963 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:20.416268 master-0 kubenswrapper[4409]: I1203 14:26:20.416177 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:20.416268 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:20.416268 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:20.416268 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:20.416268 master-0 kubenswrapper[4409]: I1203 14:26:20.416250 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:20.815244 master-0 
kubenswrapper[4409]: I1203 14:26:20.815172 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:20.815244 master-0 kubenswrapper[4409]: I1203 14:26:20.815233 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: I1203 14:26:20.815228 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: I1203 14:26:20.815195 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: I1203 14:26:20.815312 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: E1203 14:26:20.815485 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: I1203 14:26:20.815542 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:20.815555 master-0 kubenswrapper[4409]: I1203 14:26:20.815543 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815566 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815596 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815609 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815625 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815655 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815654 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:20.815725 master-0 kubenswrapper[4409]: I1203 14:26:20.815664 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815717 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815639 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815673 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815810 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815697 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:20.815923 master-0 kubenswrapper[4409]: I1203 14:26:20.815680 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: E1203 14:26:20.815825 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815949 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815863 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815976 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.816060 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815903 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815905 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815846 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: E1203 14:26:20.816112 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815946 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:20.816142 master-0 kubenswrapper[4409]: I1203 14:26:20.815878 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.815897 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816171 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816203 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816277 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816280 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816325 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816177 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816220 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816382 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.815986 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816408 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: E1203 14:26:20.816318 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816356 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816374 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816375 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:20.816481 master-0 kubenswrapper[4409]: I1203 14:26:20.816493 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816510 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816336 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816548 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816282 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816572 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816191 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816608 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816635 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816439 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816581 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816601 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816519 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: E1203 14:26:20.816777 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816796 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816625 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: E1203 14:26:20.816533 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816819 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816647 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816520 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816857 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816863 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:20.816938 master-0 kubenswrapper[4409]: I1203 14:26:20.816874 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: E1203 14:26:20.817092 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: I1203 14:26:20.817166 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: I1203 14:26:20.817177 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: I1203 14:26:20.817201 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: I1203 14:26:20.817213 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: E1203 14:26:20.817310 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: E1203 14:26:20.817436 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: E1203 14:26:20.817527 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:20.817638 master-0 kubenswrapper[4409]: E1203 14:26:20.817604 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:20.817890 master-0 kubenswrapper[4409]: E1203 14:26:20.817687 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:20.817890 master-0 kubenswrapper[4409]: E1203 14:26:20.817791 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:20.817956 master-0 kubenswrapper[4409]: E1203 14:26:20.817888 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:20.818322 master-0 kubenswrapper[4409]: E1203 14:26:20.817995 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:20.818322 master-0 kubenswrapper[4409]: E1203 14:26:20.818238 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:20.818407 master-0 kubenswrapper[4409]: E1203 14:26:20.818348 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:20.818530 master-0 kubenswrapper[4409]: E1203 14:26:20.818486 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:20.818654 master-0 kubenswrapper[4409]: E1203 14:26:20.818605 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:20.818708 master-0 kubenswrapper[4409]: E1203 14:26:20.818665 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:20.818845 master-0 kubenswrapper[4409]: E1203 14:26:20.818809 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:20.819078 master-0 kubenswrapper[4409]: E1203 14:26:20.819044 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:20.819409 master-0 kubenswrapper[4409]: E1203 14:26:20.819226 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:20.819409 master-0 kubenswrapper[4409]: E1203 14:26:20.819376 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:20.819505 master-0 kubenswrapper[4409]: E1203 14:26:20.819455 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:20.819776 master-0 kubenswrapper[4409]: E1203 14:26:20.819623 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:20.819776 master-0 kubenswrapper[4409]: E1203 14:26:20.819740 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:20.819897 master-0 kubenswrapper[4409]: E1203 14:26:20.819857 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:20.819971 master-0 kubenswrapper[4409]: E1203 14:26:20.819942 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:20.820076 master-0 kubenswrapper[4409]: E1203 14:26:20.820041 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:20.820263 master-0 kubenswrapper[4409]: E1203 14:26:20.820187 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:20.820364 master-0 kubenswrapper[4409]: E1203 14:26:20.820317 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:20.820451 master-0 kubenswrapper[4409]: E1203 14:26:20.820416 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:20.820614 master-0 kubenswrapper[4409]: E1203 14:26:20.820570 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:20.820697 master-0 kubenswrapper[4409]: E1203 14:26:20.820660 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:20.820860 master-0 kubenswrapper[4409]: E1203 14:26:20.820831 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:20.821230 master-0 kubenswrapper[4409]: E1203 14:26:20.820998 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:20.821230 master-0 kubenswrapper[4409]: E1203 14:26:20.821199 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:20.821335 master-0 kubenswrapper[4409]: E1203 14:26:20.821271 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:20.821470 master-0 kubenswrapper[4409]: E1203 14:26:20.821359 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:20.821470 master-0 kubenswrapper[4409]: E1203 14:26:20.821443 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:20.821552 master-0 kubenswrapper[4409]: E1203 14:26:20.821492 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:20.821638 master-0 kubenswrapper[4409]: E1203 14:26:20.821607 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:20.821739 master-0 kubenswrapper[4409]: E1203 14:26:20.821717 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:20.821869 master-0 kubenswrapper[4409]: E1203 14:26:20.821847 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:20.821969 master-0 kubenswrapper[4409]: E1203 14:26:20.821947 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:20.822127 master-0 kubenswrapper[4409]: E1203 14:26:20.822104 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:20.822233 master-0 kubenswrapper[4409]: E1203 14:26:20.822205 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:20.822739 master-0 kubenswrapper[4409]: E1203 14:26:20.822346 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:20.822739 master-0 kubenswrapper[4409]: E1203 14:26:20.822438 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:20.822739 master-0 kubenswrapper[4409]: E1203 14:26:20.822541 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:20.822739 master-0 kubenswrapper[4409]: E1203 14:26:20.822637 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:20.822739 master-0 kubenswrapper[4409]: E1203 14:26:20.822735 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:20.823354 master-0 kubenswrapper[4409]: E1203 14:26:20.822845 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:20.823354 master-0 kubenswrapper[4409]: E1203 14:26:20.822937 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:20.823354 master-0 kubenswrapper[4409]: E1203 14:26:20.823049 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:20.823354 master-0 kubenswrapper[4409]: E1203 14:26:20.823274 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:20.823747 master-0 kubenswrapper[4409]: E1203 14:26:20.823460 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:20.823747 master-0 kubenswrapper[4409]: E1203 14:26:20.823515 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:20.823747 master-0 kubenswrapper[4409]: E1203 14:26:20.823677 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:20.823973 master-0 kubenswrapper[4409]: E1203 14:26:20.823830 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:20.823973 master-0 kubenswrapper[4409]: E1203 14:26:20.823872 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:20.823973 master-0 kubenswrapper[4409]: E1203 14:26:20.823943 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:20.824304 master-0 kubenswrapper[4409]: E1203 14:26:20.824147 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:20.824630 master-0 kubenswrapper[4409]: E1203 14:26:20.824555 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:20.824745 master-0 kubenswrapper[4409]: E1203 14:26:20.824693 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:20.824839 master-0 kubenswrapper[4409]: E1203 14:26:20.824802 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:20.825105 master-0 kubenswrapper[4409]: E1203 14:26:20.824991 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:21.416471 master-0 kubenswrapper[4409]: I1203 14:26:21.416378 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:21.416471 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:21.416471 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:21.416471 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:21.417503 master-0 kubenswrapper[4409]: I1203 14:26:21.416478 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:22.416199 master-0 kubenswrapper[4409]: I1203 14:26:22.416112 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:22.416199 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:22.416199 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:22.416199 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:22.416199 master-0 kubenswrapper[4409]: I1203 14:26:22.416181 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:22.814925 master-0 kubenswrapper[4409]: I1203 14:26:22.814870 4409 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:22.814925 master-0 kubenswrapper[4409]: I1203 14:26:22.814910 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:22.814925 master-0 kubenswrapper[4409]: I1203 14:26:22.814906 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.814938 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.814986 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815057 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815082 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815101 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.814987 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815081 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815105 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815145 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815069 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: E1203 14:26:22.815063 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:26:22.815258 master-0 kubenswrapper[4409]: I1203 14:26:22.815221 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815418 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815429 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815448 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815456 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815449 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: E1203 14:26:22.815448 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815481 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815492 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815499 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815487 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815495 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815552 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815561 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815564 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815469 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815514 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815585 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815594 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815600 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815547 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815614 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815557 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815566 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:22.815624 master-0 kubenswrapper[4409]: I1203 14:26:22.815651 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815536 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815539 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815611 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815622 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815732 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815634 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815760 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815778 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815799 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.815694 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815817 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815762 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815829 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815839 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815780 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815801 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815536 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815868 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815885 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815815 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815629 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815530 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815920 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815867 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.815883 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815899 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815844 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815924 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815928 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815525 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.816050 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.816062 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: I1203 14:26:22.815802 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.816042 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.816190 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.816291 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:22.816410 master-0 kubenswrapper[4409]: E1203 14:26:22.816386 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816479 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816544 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816620 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816688 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816792 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816872 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.816947 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.817045 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.817126 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.817238 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.817448 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba"
Dec 03 14:26:22.817597 master-0 kubenswrapper[4409]: E1203 14:26:22.817579 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:26:22.817956 master-0 kubenswrapper[4409]: E1203 14:26:22.817659 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:26:22.817956 master-0 kubenswrapper[4409]: E1203 14:26:22.817731 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:22.817956 master-0 kubenswrapper[4409]: E1203 14:26:22.817792 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:26:22.817956 master-0 kubenswrapper[4409]: E1203 14:26:22.817873 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3"
Dec 03 14:26:22.818096 master-0 kubenswrapper[4409]: E1203 14:26:22.817969 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:26:22.818096 master-0 kubenswrapper[4409]: E1203 14:26:22.818048 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:26:22.818200 master-0 kubenswrapper[4409]: E1203 14:26:22.818168 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:26:22.818257 master-0 kubenswrapper[4409]: E1203 14:26:22.818232 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb"
Dec 03 14:26:22.818338 master-0 kubenswrapper[4409]: E1203 14:26:22.818305 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:26:22.818401 master-0 kubenswrapper[4409]: E1203 14:26:22.818380 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0"
Dec 03 14:26:22.818460 master-0 kubenswrapper[4409]: E1203 14:26:22.818441 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a"
Dec 03 14:26:22.818550 master-0 kubenswrapper[4409]: E1203 14:26:22.818521 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641"
Dec 03 14:26:22.818660 master-0 kubenswrapper[4409]: E1203 14:26:22.818631 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:26:22.818745 master-0 kubenswrapper[4409]: E1203 14:26:22.818724 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:26:22.818805 master-0 kubenswrapper[4409]: E1203 14:26:22.818781 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:26:22.819027 master-0 kubenswrapper[4409]: E1203 14:26:22.818978 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88"
Dec 03 14:26:22.819068 master-0 kubenswrapper[4409]: E1203 14:26:22.819042 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:26:22.819122 master-0 kubenswrapper[4409]: E1203 14:26:22.819104 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d"
Dec 03 14:26:22.819192 master-0 kubenswrapper[4409]: E1203 14:26:22.819172 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b"
Dec 03 14:26:22.819261 master-0 kubenswrapper[4409]: E1203 14:26:22.819239 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4"
Dec 03 14:26:22.819337 master-0 kubenswrapper[4409]: E1203 14:26:22.819323 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d"
Dec 03 14:26:22.819452 master-0 kubenswrapper[4409]: E1203 14:26:22.819424 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:26:22.819508 master-0 kubenswrapper[4409]: E1203 14:26:22.819483 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff"
Dec 03 14:26:22.819600 master-0 kubenswrapper[4409]: E1203 14:26:22.819577 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c"
Dec 03 14:26:22.819687 master-0 kubenswrapper[4409]: E1203 14:26:22.819662 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:26:22.819756 master-0 kubenswrapper[4409]: E1203 14:26:22.819732 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:26:22.819837 master-0 kubenswrapper[4409]: E1203 14:26:22.819813 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:22.819916 master-0 kubenswrapper[4409]: E1203 14:26:22.819896 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813"
Dec 03 14:26:22.819962 master-0 kubenswrapper[4409]: E1203 14:26:22.819938 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048"
Dec 03 14:26:22.820045 master-0 kubenswrapper[4409]: E1203 14:26:22.820023 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832"
Dec 03 14:26:22.820087 master-0 kubenswrapper[4409]: E1203 14:26:22.820064 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:26:22.820179 master-0 kubenswrapper[4409]: E1203 14:26:22.820161 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:26:22.820226 master-0 kubenswrapper[4409]: E1203 14:26:22.820208 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f"
Dec 03 14:26:22.820306 master-0 kubenswrapper[4409]: E1203 14:26:22.820280 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89"
Dec 03 14:26:22.820350 master-0 kubenswrapper[4409]: E1203 14:26:22.820335 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a"
Dec 03 14:26:22.820433 master-0 kubenswrapper[4409]: E1203 14:26:22.820406 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:22.820487 master-0 kubenswrapper[4409]: E1203 14:26:22.820466 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:22.820570 master-0 kubenswrapper[4409]: E1203 14:26:22.820549 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:22.820614 master-0 kubenswrapper[4409]: E1203 14:26:22.820590 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:22.820680 master-0 kubenswrapper[4409]: E1203 14:26:22.820662 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:22.820777 master-0 kubenswrapper[4409]: E1203 14:26:22.820752 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:22.820843 master-0 kubenswrapper[4409]: E1203 14:26:22.820819 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:22.820901 master-0 kubenswrapper[4409]: E1203 14:26:22.820876 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:22.820959 master-0 kubenswrapper[4409]: E1203 14:26:22.820938 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:22.821064 master-0 kubenswrapper[4409]: E1203 14:26:22.821039 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:22.821233 master-0 kubenswrapper[4409]: E1203 14:26:22.821183 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:22.821299 master-0 kubenswrapper[4409]: E1203 14:26:22.821277 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:22.821382 master-0 kubenswrapper[4409]: E1203 14:26:22.821360 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:23.024080 master-0 kubenswrapper[4409]: E1203 14:26:23.023985 4409 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:26:23.416833 master-0 kubenswrapper[4409]: I1203 14:26:23.416682 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:23.416833 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:23.416833 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:23.416833 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:23.418137 master-0 kubenswrapper[4409]: I1203 14:26:23.416858 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:24.415926 master-0 kubenswrapper[4409]: I1203 14:26:24.415837 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:24.415926 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:24.415926 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:24.415926 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:24.416448 master-0 kubenswrapper[4409]: I1203 
14:26:24.415953 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740610 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740720 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740753 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740783 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:24.740866 
master-0 kubenswrapper[4409]: I1203 14:26:24.740804 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740879 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:24.740866 master-0 kubenswrapper[4409]: I1203 14:26:24.740902 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.740924 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.740941 4409 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.740986 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.740998 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741025 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741090 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741065368 +0000 UTC m=+33.068127874 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.740964 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741124 4409 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741144 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741118869 +0000 UTC m=+33.068181415 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741174 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741208 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741200921 +0000 UTC m=+33.068263427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741217 4409 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741243 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 
14:26:24.741255 4409 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741259 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741239393 +0000 UTC m=+33.068301899 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741291 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741281294 +0000 UTC m=+33.068343800 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-oauth-config" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741284 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741310 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741302644 +0000 UTC m=+33.068365150 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741321 4409 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741363 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741380 4409 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741403 4409 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741416 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741405667 +0000 UTC m=+33.068468383 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741412 4409 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741331 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741332 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741437 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741427988 +0000 UTC m=+33.068490744 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741426 4409 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741598 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741565572 +0000 UTC m=+33.068628228 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741619 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741611923 +0000 UTC m=+33.068674429 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"federate-client-certs" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741453 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741651 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741643514 +0000 UTC m=+33.068706020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741676 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741778 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Dec 03 14:26:24.742652 
master-0 kubenswrapper[4409]: E1203 14:26:24.741777 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741738187 +0000 UTC m=+33.068800813 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741820 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741804199 +0000 UTC m=+33.068866945 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741873 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741896 4409 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.741902 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741894381 +0000 UTC m=+33.068956887 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741926 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.741964 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742022 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.741996744 +0000 UTC m=+33.069059300 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742035 4409 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742048 4409 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742067 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742057446 +0000 UTC m=+33.069120032 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742032 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742109 4409 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742119 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742079926 +0000 UTC m=+33.069142522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"oauth-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742174 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742164049 +0000 UTC m=+33.069226745 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-cabundle" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742168 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742215 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742227 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742285 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742274882 +0000 UTC m=+33.069337448 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742309 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742315 4409 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742350 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742341054 +0000 UTC m=+33.069403620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742370 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742361064 +0000 UTC m=+33.069423670 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742249 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742442 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742478 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742511 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742534 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742544 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742561 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.74255507 +0000 UTC m=+33.069617576 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"console-operator-config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742535 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742580 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742582 4409 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742619 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742626 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742591 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742586541 +0000 UTC m=+33.069649047 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742660 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742691 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742719 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742706144 +0000 UTC m=+33.069768830 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742744 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742733855 +0000 UTC m=+33.069796361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742761 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742765 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742754675 +0000 UTC m=+33.069817401 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742778 4409 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: I1203 14:26:24.742803 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742820 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:26:24.742652 master-0 kubenswrapper[4409]: E1203 14:26:24.742828 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.742816827 +0000 UTC m=+33.069879393 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.742862 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.742965 4409 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744077 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744122 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744107854 +0000 UTC m=+33.071170350 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744149 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744142775 +0000 UTC m=+33.071205281 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744163 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744168 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744161705 +0000 UTC m=+33.071224211 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744205 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744194616 +0000 UTC m=+33.071257202 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744233 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744279 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744332 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744364 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744355451 +0000 UTC m=+33.071417957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"metrics-client-certs" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744331 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744383 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744435 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744420 4409 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744468 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744497 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744484624 +0000 UTC m=+33.071547341 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"serving-cert" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744516 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744507565 +0000 UTC m=+33.071570181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"telemetry-config" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744519 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744535 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744574 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744564107 +0000 UTC m=+33.071626793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744612 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744598838 +0000 UTC m=+33.071661524 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744645 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744682 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744708 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744740 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744732032 +0000 UTC m=+33.071794538 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744708 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744759 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744790 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744783313 +0000 UTC m=+33.071845819 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744808 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744826 4409 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744899 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744876606 +0000 UTC m=+33.071939152 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744907 4409 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744922 4409 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744952 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.744943058 +0000 UTC m=+33.072005644 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.744973 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.744979 4409 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745088 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745125 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert podName:b02244d0-f4ef-4702-950d-9e3fb5ced128 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745107572 +0000 UTC m=+33.072170198 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert") pod "monitoring-plugin-547cc9cc49-kqs4k" (UID: "b02244d0-f4ef-4702-950d-9e3fb5ced128") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745134 4409 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745170 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745182 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745207 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745198515 +0000 UTC m=+33.072261221 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745223 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745260 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745272 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745282 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745303 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745291477 +0000 UTC m=+33.072353983 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745323 4409 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745328 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745606 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.74538439 +0000 UTC m=+33.072446896 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745625 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745678 4409 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745694 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745707 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745699459 +0000 UTC m=+33.072761965 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745756 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.74575008 +0000 UTC m=+33.072812586 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745732 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745769 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745764081 +0000 UTC m=+33.072826587 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"openshift-global-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745837 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745869 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745888 4409 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745890 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745881674 +0000 UTC m=+33.072944180 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745940 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745967 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745956926 +0000 UTC m=+33.073019432 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.745977 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746021 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.745997997 +0000 UTC m=+33.073060503 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.745923 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746043 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746031948 +0000 UTC m=+33.073094454 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : object "openshift-catalogd"/"catalogserver-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746072 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746119 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746161 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746171 4409 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746200 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746213 4409 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746226 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746240 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746232854 +0000 UTC m=+33.073295360 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746313 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.746305186 +0000 UTC m=+33.073367692 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"service-ca" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746250 4409 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746338 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746369 4409 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746278 4409 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746414 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" 
Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746434 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.74642647 +0000 UTC m=+33.073488976 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746343 4409 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746459 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746464 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746474 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746462331 +0000 UTC m=+33.073524907 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : object "openshift-service-ca"/"signing-key" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746488 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746482271 +0000 UTC m=+33.073544767 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746497 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746506 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746517 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images 
podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746511282 +0000 UTC m=+33.073573788 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"machine-api-operator-images" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746534 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746525252 +0000 UTC m=+33.073587758 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : object "openshift-dns-operator"/"metrics-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746561 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746573 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746580 4409 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746569624 +0000 UTC m=+33.073632330 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746609 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746615 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746633 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746626755 +0000 UTC m=+33.073689251 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746641 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746655 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746671 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746660406 +0000 UTC m=+33.073723082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746698 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746705 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746697697 +0000 UTC m=+33.073760203 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746738 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746748 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746762 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls podName:8c6fa89f-268c-477b-9f04-238d2305cc89 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746756209 +0000 UTC m=+33.073818715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls") pod "machine-config-controller-74cddd4fb5-phk6r" (UID: "8c6fa89f-268c-477b-9f04-238d2305cc89") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746722 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746781 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746788 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.74678378 +0000 UTC m=+33.073846286 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746822 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746834 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746859 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746851422 +0000 UTC m=+33.073913918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-tls" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746886 4409 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: E1203 14:26:24.746799 4409 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:24.748147 master-0 kubenswrapper[4409]: I1203 14:26:24.746907 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.746917 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746908503 +0000 UTC m=+33.073971189 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.746933 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746925774 +0000 UTC m=+33.073988470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.746947 4409 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.746989 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.746983175 +0000 UTC m=+33.074045681 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.746953 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747019 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747025 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747046 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747036237 +0000 UTC m=+33.074098963 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747070 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747088 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747082968 +0000 UTC m=+33.074145474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747066 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747104 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: 
E1203 14:26:24.747127 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747119799 +0000 UTC m=+33.074182305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747132 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747212 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747244 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747234683 +0000 UTC m=+33.074297399 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747214 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747268 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747290 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747330 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747318065 +0000 UTC m=+33.074380761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747346 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747461 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747453659 +0000 UTC m=+33.074516165 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"config" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747735 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747771 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: 
\"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747819 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747852 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747871 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747882 4409 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.747894 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: 
\"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747923 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747921 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.747909732 +0000 UTC m=+33.074972428 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"trusted-ca-bundle" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747958 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747999 4409 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748049 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.748038845 +0000 UTC m=+33.075101551 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.747962 4409 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748068 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748028 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748071 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.748060346 +0000 UTC m=+33.075122852 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"serving-cert" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748100 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.748091717 +0000 UTC m=+33.075154223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748112 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca podName:44af6af5-cecb-4dc4-b793-e8e350f8a47d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.748106707 +0000 UTC m=+33.075169213 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca") pod "cluster-image-registry-operator-65dc4bcb88-96zcz" (UID: "44af6af5-cecb-4dc4-b793-e8e350f8a47d") : object "openshift-image-registry"/"trusted-ca" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.748121 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.748117138 +0000 UTC m=+33.075179644 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748136 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748168 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748188 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748207 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748228 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748248 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748267 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748300 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748320 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748340 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748368 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748397 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748415 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748454 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748474 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748503 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748548 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748577 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748597 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748615 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748634 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748660 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748705 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748734 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748752 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748782 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748801 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748822 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748856 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748893 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748920 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748946 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.748972 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749018 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749047 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749074 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749122 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749174 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749195 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749216 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749238 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749261 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749283 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749303 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749336 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749354 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749378 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749401 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749420 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749440 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749467 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749503 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749522 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749542 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749561 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749583 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749610 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749635 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749655 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749674 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749696 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749736 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749759 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: I1203 14:26:24.749788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.749912 4409 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.749946 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.749937379 +0000 UTC m=+33.076999885 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.749979 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750021 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0 podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.749996061 +0000 UTC m=+33.077058567 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750068 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750093 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750084953 +0000 UTC m=+33.077147459 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750118 4409 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750142 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750134525 +0000 UTC m=+33.077197031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750184 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750210 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750202287 +0000 UTC m=+33.077264793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750239 4409 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750260 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750253578 +0000 UTC m=+33.077316084 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750296 4409 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750317 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.75031166 +0000 UTC m=+33.077374166 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"serving-cert" not registered
Dec 03 14:26:24.758706 master-0 kubenswrapper[4409]: E1203 14:26:24.750343 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750363 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750356571 +0000 UTC m=+33.077419077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750433 4409 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750461 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750451964 +0000 UTC m=+33.077514470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750502 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750527 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates podName:9e0a2889-39a5-471e-bd46-958e2f8eacaa nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750518926 +0000 UTC m=+33.077581432 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates") pod "prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" (UID: "9e0a2889-39a5-471e-bd46-958e2f8eacaa") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750561 4409 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750596 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750589128 +0000 UTC m=+33.077651634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"config" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750628 4409 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750685 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.75067678 +0000 UTC m=+33.077739286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"audit" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750718 4409 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750741 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750734242 +0000 UTC m=+33.077796748 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : object "openshift-console-operator"/"trusted-ca" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750783 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-33kamir7f7ukf: object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750810 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750802574 +0000 UTC m=+33.077865180 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750850 4409 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750873 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750866066 +0000 UTC m=+33.077928572 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750903 4409 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750927 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750919927 +0000 UTC m=+33.077982433 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750969 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.750996 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.750988879 +0000 UTC m=+33.078051385 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751063 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751094 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751084732 +0000 UTC m=+33.078147248 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751138 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751164 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751157224 +0000 UTC m=+33.078219730 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751196 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751219 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751211845 +0000 UTC m=+33.078274351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"audit-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751246 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751270 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751263017 +0000 UTC m=+33.078325523 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-serving-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751313 4409 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751340 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751331129 +0000 UTC m=+33.078393635 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"encryption-config-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751381 4409 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751409 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert podName:b1b3ab29-77cf-48ac-8881-846c46bb9048 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751400931 +0000 UTC m=+33.078463517 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert") pod "networking-console-plugin-7c696657b7-452tx" (UID: "b1b3ab29-77cf-48ac-8881-846c46bb9048") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751453 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751480 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751472913 +0000 UTC m=+33.078535429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751511 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751536 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.751529074 +0000 UTC m=+33.078591580 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751577 4409 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751601 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751595016 +0000 UTC m=+33.078657522 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"metrics-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751642 4409 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751664 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.751657588 +0000 UTC m=+33.078720194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751704 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2bc14vqi7sofg: object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751728 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.75171952 +0000 UTC m=+33.078782196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-2bc14vqi7sofg" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751757 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751780 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.751773221 +0000 UTC m=+33.078835827 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"image-import-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751817 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751843 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751836103 +0000 UTC m=+33.078898609 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751899 4409 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751929 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.751920815 +0000 UTC m=+33.078983321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-client-certs" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.751994 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752053 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.752043959 +0000 UTC m=+33.079106465 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752126 4409 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752161 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.752148302 +0000 UTC m=+33.079210818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"etcd-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752202 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-56c9b9fa8d9gs: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752228 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.752220834 +0000 UTC m=+33.079283340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752260 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752284 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.752276186 +0000 UTC m=+33.079338692 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752327 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752351 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.752344167 +0000 UTC m=+33.079406673 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752382 4409 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753189 4409 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752910 4409 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752966 4409 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753259 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753301 4409 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752977 4409 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not 
registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.752988 4409 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753025 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753025 4409 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753027 4409 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753044 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753056 4409 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753057 4409 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753079 4409 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Dec 03 14:26:24.767476 
master-0 kubenswrapper[4409]: E1203 14:26:24.753082 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-8ekn1l23o09kv: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753093 4409 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753099 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753101 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753124 4409 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753126 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753128 4409 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753133 4409 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object 
"openshift-machine-config-operator"/"mco-proxy-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753135 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753160 4409 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753166 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753170 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753176 4409 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753225 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753211242 +0000 UTC m=+33.080273838 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : object "openshift-oauth-apiserver"/"audit-1" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753759 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753746257 +0000 UTC m=+33.080808763 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"etcd-client" not registered Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753772 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753765998 +0000 UTC m=+33.080828504 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"service-ca-bundle" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753271 4409 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753788 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs podName:38888547-ed48-4f96-810d-bcd04e49bd6b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753779888 +0000 UTC m=+33.080842394 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs") pod "multus-admission-controller-84c998f64f-8stq7" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b") : object "openshift-multus"/"multus-admission-controller-secret" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753944 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753914762 +0000 UTC m=+33.080977438 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.753976 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753964873 +0000 UTC m=+33.081027589 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754023 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.753994554 +0000 UTC m=+33.081057280 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default-metrics-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754050 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:40.754041906 +0000 UTC m=+33.081104612 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754075 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754066246 +0000 UTC m=+33.081128952 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754101 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754092867 +0000 UTC m=+33.081155593 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754129 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754116408 +0000 UTC m=+33.081179134 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754159 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754148159 +0000 UTC m=+33.081210885 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754186 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754178639 +0000 UTC m=+33.081241355 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : object "openshift-route-controller-manager"/"client-ca" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754214 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls podName:04e9e2a5-cdc2-42af-ab2c-49525390be6d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.75420765 +0000 UTC m=+33.081270396 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls") pod "cluster-node-tuning-operator-bbd9b9dff-rrfsm" (UID: "04e9e2a5-cdc2-42af-ab2c-49525390be6d") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754238 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754231361 +0000 UTC m=+33.081294077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-client" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754264 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754257192 +0000 UTC m=+33.081319918 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754287 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754278902 +0000 UTC m=+33.081341608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754310 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754303093 +0000 UTC m=+33.081365819 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754333 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls podName:aa169e84-880b-4e6d-aeee-7ebfa1f613d2 nodeName:}" failed.
No retries permitted until 2025-12-03 14:26:40.754325844 +0000 UTC m=+33.081388560 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls") pod "prometheus-operator-565bdcb8-477pk" (UID: "aa169e84-880b-4e6d-aeee-7ebfa1f613d2") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754360 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754348774 +0000 UTC m=+33.081411490 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754408 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754399036 +0000 UTC m=+33.081461742 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754431 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754424016 +0000 UTC m=+33.081486762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754454 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls podName:4df2889c-99f7-402a-9d50-18ccf427179c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754447287 +0000 UTC m=+33.081510003 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls") pod "machine-config-operator-664c9d94c9-9vfr4" (UID: "4df2889c-99f7-402a-9d50-18ccf427179c") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754479 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754471448 +0000 UTC m=+33.081534164 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754503 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754496188 +0000 UTC m=+33.081558914 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : object "openshift-ingress-operator"/"trusted-ca" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754519 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles podName:09b7b0c6-47cc-4860-8c78-9583bb5b0a6e nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754511719 +0000 UTC m=+33.081574445 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles") pod "metrics-server-555496955b-vpcbs" (UID: "09b7b0c6-47cc-4860-8c78-9583bb5b0a6e") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754537 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.754528579 +0000 UTC m=+33.081591305 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Dec 03 14:26:24.767476 master-0 kubenswrapper[4409]: E1203 14:26:24.754553 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls podName:8a12409a-0be3-4023-9df3-a0f091aac8dc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.75454567 +0000 UTC m=+33.081608386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls") pod "thanos-querier-cc996c4bd-j4hzr" (UID: "8a12409a-0be3-4023-9df3-a0f091aac8dc") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Dec 03 14:26:24.776177 master-0 kubenswrapper[4409]: E1203 14:26:24.754575 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.7545697 +0000 UTC m=+33.081632427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered
Dec 03 14:26:24.815088 master-0 kubenswrapper[4409]: I1203 14:26:24.814985 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:24.815590 master-0 kubenswrapper[4409]: I1203 14:26:24.815131 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:24.815590 master-0 kubenswrapper[4409]: I1203 14:26:24.815567 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815600 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815621 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815203 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815650 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815306 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815302 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815310 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: E1203 14:26:24.815299 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815683 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815345 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:24.815711 master-0 kubenswrapper[4409]: I1203 14:26:24.815346 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815740 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815794 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815386 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815389 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815387 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815393 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815402 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815404 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815404 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: E1203 14:26:24.815949 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815991 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815412 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815415 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815423 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815425 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815425 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815431 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815439 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815469 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815476 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815476 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:24.816323 master-0 kubenswrapper[4409]: I1203 14:26:24.815485 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816328 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815494 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815512 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815514 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815518 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815523 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815524 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815508 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815568 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815550 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815602 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815164 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815570 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815616 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815624 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815627 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816865 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815649 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815149 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815284 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815678 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815327 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815339 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815714 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815354 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.817042 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815762 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815774 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.817169 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815360 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815397 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815408 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.816027 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.816054 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.816082 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815509 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816682 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816647 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816764 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815636 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: I1203 14:26:24.815747 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.816188 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:24.817385 master-0 kubenswrapper[4409]: E1203 14:26:24.817352 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.817472 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.817582 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.817684 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.817817 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.817908 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818020 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818153 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818267 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818395 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818476 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818554 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818641 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818724 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818794 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818903 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:24.819153 master-0 kubenswrapper[4409]: E1203 14:26:24.818991 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:24.832267 master-0 kubenswrapper[4409]: E1203 14:26:24.819098 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:24.832369 master-0 kubenswrapper[4409]: E1203 14:26:24.819617 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:24.832592 master-0 kubenswrapper[4409]: E1203 14:26:24.832530 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:24.832839 master-0 kubenswrapper[4409]: E1203 14:26:24.832755 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:24.833073 master-0 kubenswrapper[4409]: E1203 14:26:24.832987 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:24.833242 master-0 kubenswrapper[4409]: E1203 14:26:24.833207 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:24.833347 master-0 kubenswrapper[4409]: E1203 14:26:24.833319 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:24.833591 master-0 kubenswrapper[4409]: E1203 14:26:24.833558 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:24.833718 master-0 kubenswrapper[4409]: E1203 14:26:24.833693 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:24.833828 master-0 kubenswrapper[4409]: E1203 14:26:24.833804 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:24.833920 master-0 kubenswrapper[4409]: E1203 14:26:24.833900 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:24.834017 master-0 kubenswrapper[4409]: E1203 14:26:24.833980 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:24.834312 master-0 kubenswrapper[4409]: E1203 14:26:24.834262 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:24.834677 master-0 kubenswrapper[4409]: E1203 14:26:24.834604 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:24.834825 master-0 kubenswrapper[4409]: E1203 14:26:24.834777 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:24.834974 master-0 kubenswrapper[4409]: E1203 14:26:24.834937 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:24.835099 master-0 kubenswrapper[4409]: E1203 14:26:24.835057 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:24.835341 master-0 kubenswrapper[4409]: E1203 14:26:24.835278 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:24.835580 master-0 kubenswrapper[4409]: E1203 14:26:24.835519 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:24.835869 master-0 kubenswrapper[4409]: E1203 14:26:24.835804 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:24.836118 master-0 kubenswrapper[4409]: E1203 14:26:24.836074 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:24.837139 master-0 kubenswrapper[4409]: E1203 14:26:24.837069 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:24.837731 master-0 kubenswrapper[4409]: E1203 14:26:24.837586 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:24.837731 master-0 kubenswrapper[4409]: E1203 14:26:24.837654 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:24.837731 master-0 kubenswrapper[4409]: E1203 14:26:24.837715 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:24.838659 master-0 kubenswrapper[4409]: E1203 14:26:24.837971 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:24.838659 master-0 kubenswrapper[4409]: E1203 14:26:24.838045 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:24.838659 master-0 kubenswrapper[4409]: E1203 14:26:24.838095 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:24.838659 master-0 kubenswrapper[4409]: E1203 14:26:24.838182 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:24.838659 master-0 kubenswrapper[4409]: E1203 14:26:24.838568 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:24.838927 master-0 kubenswrapper[4409]: E1203 14:26:24.838725 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:24.838927 master-0 kubenswrapper[4409]: E1203 14:26:24.838792 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:24.839035 master-0 kubenswrapper[4409]: E1203 14:26:24.838941 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:24.839127 master-0 kubenswrapper[4409]: E1203 14:26:24.839091 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:24.839308 master-0 kubenswrapper[4409]: E1203 14:26:24.839273 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:24.839447 master-0 kubenswrapper[4409]: E1203 14:26:24.839401 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:24.839599 master-0 kubenswrapper[4409]: E1203 14:26:24.839556 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:24.839667 master-0 kubenswrapper[4409]: E1203 14:26:24.839639 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:24.839786 master-0 kubenswrapper[4409]: E1203 14:26:24.839753 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:24.839896 master-0 kubenswrapper[4409]: E1203 14:26:24.839856 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:24.839970 master-0 kubenswrapper[4409]: E1203 14:26:24.839933 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:24.852912 master-0 kubenswrapper[4409]: I1203 14:26:24.852830 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:24.853090 master-0 kubenswrapper[4409]: I1203 14:26:24.852929 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.853090 master-0 kubenswrapper[4409]: I1203 14:26:24.853050 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.853207 master-0 kubenswrapper[4409]: E1203 14:26:24.853088 4409 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Dec 03 14:26:24.853207 master-0 kubenswrapper[4409]: I1203 14:26:24.853104 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:24.853293 master-0 
kubenswrapper[4409]: E1203 14:26:24.853204 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume podName:4669137a-fbc4-41e1-8eeb-5f06b9da2641 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853175806 +0000 UTC m=+33.180238302 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume") pod "dns-default-5m4f8" (UID: "4669137a-fbc4-41e1-8eeb-5f06b9da2641") : object "openshift-dns"/"dns-default" not registered Dec 03 14:26:24.853359 master-0 kubenswrapper[4409]: E1203 14:26:24.853220 4409 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:24.853359 master-0 kubenswrapper[4409]: E1203 14:26:24.853315 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:24.853454 master-0 kubenswrapper[4409]: E1203 14:26:24.853367 4409 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Dec 03 14:26:24.853454 master-0 kubenswrapper[4409]: E1203 14:26:24.853389 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:24.853454 master-0 kubenswrapper[4409]: I1203 14:26:24.853320 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:24.853587 master-0 
kubenswrapper[4409]: E1203 14:26:24.853408 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853381962 +0000 UTC m=+33.180444658 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Dec 03 14:26:24.853587 master-0 kubenswrapper[4409]: E1203 14:26:24.853519 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853504915 +0000 UTC m=+33.180567621 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : object "openshift-console"/"console-config" not registered Dec 03 14:26:24.853587 master-0 kubenswrapper[4409]: E1203 14:26:24.853554 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853543086 +0000 UTC m=+33.180605832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Dec 03 14:26:24.853587 master-0 kubenswrapper[4409]: E1203 14:26:24.853582 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853571347 +0000 UTC m=+33.180634083 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Dec 03 14:26:24.853823 master-0 kubenswrapper[4409]: I1203 14:26:24.853671 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:24.853823 master-0 kubenswrapper[4409]: I1203 14:26:24.853758 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:24.853823 master-0 kubenswrapper[4409]: E1203 14:26:24.853806 4409 configmap.go:193] Couldn't get 
configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:24.853823 master-0 kubenswrapper[4409]: I1203 14:26:24.853820 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:24.853992 master-0 kubenswrapper[4409]: E1203 14:26:24.853863 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853832444 +0000 UTC m=+33.180894960 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : object "openshift-controller-manager"/"client-ca" not registered Dec 03 14:26:24.853992 master-0 kubenswrapper[4409]: E1203 14:26:24.853888 4409 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:24.853992 master-0 kubenswrapper[4409]: I1203 14:26:24.853906 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:24.853992 master-0 kubenswrapper[4409]: E1203 14:26:24.853920 4409 
configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:24.853992 master-0 kubenswrapper[4409]: E1203 14:26:24.853943 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config podName:5d838c1a-22e2-4096-9739-7841ef7d06ba nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853918567 +0000 UTC m=+33.180981073 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config") pod "alertmanager-main-0" (UID: "5d838c1a-22e2-4096-9739-7841ef7d06ba") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854037 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.853998499 +0000 UTC m=+33.181061005 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: I1203 14:26:24.854040 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: I1203 14:26:24.854081 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854048 4409 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: I1203 14:26:24.854126 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854134 4409 secret.go:189] Couldn't get secret 
openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: I1203 14:26:24.854152 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854173 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs podName:b3c1ebb9-f052-410b-a999-45e9b75b0e58 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854157914 +0000 UTC m=+33.181220410 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs") pod "network-metrics-daemon-ch7xd" (UID: "b3c1ebb9-f052-410b-a999-45e9b75b0e58") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854176 4409 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854198 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854182624 +0000 UTC m=+33.181245130 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854208 4409 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854219 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854208635 +0000 UTC m=+33.181271151 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : object "openshift-insights"/"trusted-ca-bundle" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854215 4409 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:24.854246 master-0 kubenswrapper[4409]: E1203 14:26:24.854238 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854229576 +0000 UTC m=+33.181292082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854265 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854305 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854314 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config podName:829d285f-d532-45e4-b1ec-54adbc21b9f9 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854303978 +0000 UTC m=+33.181366474 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-764cbf5554-kftwv" (UID: "829d285f-d532-45e4-b1ec-54adbc21b9f9") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854382 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854395 4409 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854409 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854426 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:40.854420381 +0000 UTC m=+33.181482887 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854448 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854456 4409 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854475 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854481 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls podName:ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854474413 +0000 UTC m=+33.181536919 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-69cc794c58-mfjk2" (UID: "ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854481 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854498 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854513 4409 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: I1203 14:26:24.854512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854529 4409 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 
14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854548 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854540205 +0000 UTC m=+33.181602801 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854570 4409 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854572 4409 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854582 4409 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854591 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854585286 +0000 UTC m=+33.181647792 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854816 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls podName:56649bd4-ac30-4a70-8024-772294fede88 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854799372 +0000 UTC m=+33.181861868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "56649bd4-ac30-4a70-8024-772294fede88") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854841 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config podName:74e39dce-29d5-4b2a-ab19-386b6cdae94d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854833063 +0000 UTC m=+33.181895569 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-57cbc648f8-q4cgg" (UID: "74e39dce-29d5-4b2a-ab19-386b6cdae94d") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Dec 03 14:26:24.854852 master-0 kubenswrapper[4409]: E1203 14:26:24.854860 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls podName:8eee1d96-2f58-41a6-ae51-c158b29fc813 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.854852573 +0000 UTC m=+33.181915079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls") pod "kube-state-metrics-7dcc7f9bd6-68wml" (UID: "8eee1d96-2f58-41a6-ae51-c158b29fc813") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Dec 03 14:26:24.856166 master-0 kubenswrapper[4409]: I1203 14:26:24.855063 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:24.856166 master-0 kubenswrapper[4409]: E1203 14:26:24.855218 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:24.856166 master-0 kubenswrapper[4409]: E1203 14:26:24.855237 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:24.856166 master-0 kubenswrapper[4409]: E1203 14:26:24.855248 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p6dpf for pod openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:24.856166 master-0 kubenswrapper[4409]: E1203 14:26:24.855285 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf podName:f8c6a484-5f0e-4abc-bc48-934ad0ffde0a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:40.855276155 +0000 UTC m=+33.182338661 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6dpf" (UniqueName: "kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf") pod "network-check-source-6964bb78b7-g4lv2" (UID: "f8c6a484-5f0e-4abc-bc48-934ad0ffde0a") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061105 master-0 kubenswrapper[4409]: I1203 14:26:25.061018 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:25.061105 master-0 kubenswrapper[4409]: I1203 14:26:25.061093 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod 
\"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:25.061365 master-0 kubenswrapper[4409]: I1203 14:26:25.061169 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:25.061365 master-0 kubenswrapper[4409]: I1203 14:26:25.061254 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:25.061365 master-0 kubenswrapper[4409]: I1203 14:26:25.061280 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:25.061473 master-0 kubenswrapper[4409]: I1203 14:26:25.061361 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:25.061473 master-0 kubenswrapper[4409]: E1203 14:26:25.061396 4409 projected.go:288] Couldn't get configMap 
openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:25.061473 master-0 kubenswrapper[4409]: I1203 14:26:25.061437 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: E1203 14:26:25.061447 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: I1203 14:26:25.061511 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: E1203 14:26:25.061519 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: E1203 14:26:25.061531 4409 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: E1203 14:26:25.061540 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rjbsl for pod openshift-machine-api/machine-api-operator-7486ff55f-wcnxg: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061564 master-0 kubenswrapper[4409]: E1203 14:26:25.061534 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061600 4409 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061608 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061619 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cgq6z for pod openshift-etcd-operator/etcd-operator-7978bf889c-n64v4: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061623 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061456 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061644 4409 projected.go:194] Error preparing data for projected volume kube-api-access-d8bbn for pod openshift-console/console-6c9c84854-xf7nv: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061745 master-0 
kubenswrapper[4409]: E1203 14:26:25.061650 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ncwtx for pod openshift-marketplace/redhat-marketplace-ddwmn: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061553 4409 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061705 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jzlgx for pod openshift-service-ca/service-ca-6b8bb995f7-b68p8: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061709 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx podName:614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.061683799 +0000 UTC m=+33.388746305 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-ncwtx" (UniqueName: "kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx") pod "redhat-marketplace-ddwmn" (UID: "614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.061745 master-0 kubenswrapper[4409]: E1203 14:26:25.061735 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z podName:52100521-67e9-40c9-887c-eda6560f06e0 nodeName:}" failed. 
No retries permitted until 2025-12-03 14:26:41.06172689 +0000 UTC m=+33.388789396 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cgq6z" (UniqueName: "kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z") pod "etcd-operator-7978bf889c-n64v4" (UID: "52100521-67e9-40c9-887c-eda6560f06e0") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061786 4409 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061792 4409 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061803 4409 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061809 4409 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061818 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zhc87 for pod openshift-insights/insights-operator-59d99f9b7b-74sss: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061821 4409 projected.go:194] Error preparing data for projected volume kube-api-access-t8knq for pod 
openshift-catalogd/catalogd-controller-manager-754cfd84-qf898: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061836 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl podName:e9f484c1-1564-49c7-a43d-bd8b971cea20 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.061815903 +0000 UTC m=+33.388878619 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjbsl" (UniqueName: "kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl") pod "machine-api-operator-7486ff55f-wcnxg" (UID: "e9f484c1-1564-49c7-a43d-bd8b971cea20") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061801 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061865 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061875 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq podName:69b752ed-691c-4574-a01e-428d4bf85b75 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.061852794 +0000 UTC m=+33.388915320 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t8knq" (UniqueName: "kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq") pod "catalogd-controller-manager-754cfd84-qf898" (UID: "69b752ed-691c-4574-a01e-428d4bf85b75") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061881 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p7ss6 for pod openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061896 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx podName:36da3c2f-860c-4188-a7d7-5b615981a835 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.061887815 +0000 UTC m=+33.388950331 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jzlgx" (UniqueName: "kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx") pod "service-ca-6b8bb995f7-b68p8" (UID: "36da3c2f-860c-4188-a7d7-5b615981a835") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.061934 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6 podName:d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.061925206 +0000 UTC m=+33.388987932 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7ss6" (UniqueName: "kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6") pod "packageserver-7c64dd9d8b-49skr" (UID: "d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.062061 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87 podName:c95705e3-17ef-40fe-89e8-22586a32621b nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.062044899 +0000 UTC m=+33.389107415 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-zhc87" (UniqueName: "kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87") pod "insights-operator-59d99f9b7b-74sss" (UID: "c95705e3-17ef-40fe-89e8-22586a32621b") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062099 master-0 kubenswrapper[4409]: E1203 14:26:25.062084 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn podName:8b442f72-b113-4227-93b5-ea1ae90d5154 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.06207615 +0000 UTC m=+33.389138666 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d8bbn" (UniqueName: "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn") pod "console-6c9c84854-xf7nv" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.062917 master-0 kubenswrapper[4409]: I1203 14:26:25.062882 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:25.062961 master-0 kubenswrapper[4409]: I1203 14:26:25.062939 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:25.062993 master-0 kubenswrapper[4409]: I1203 14:26:25.062970 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:25.063047 master-0 kubenswrapper[4409]: I1203 14:26:25.063023 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: 
\"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:25.063129 master-0 kubenswrapper[4409]: I1203 14:26:25.063103 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:25.063173 master-0 kubenswrapper[4409]: I1203 14:26:25.063151 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:25.063336 master-0 kubenswrapper[4409]: I1203 14:26:25.063311 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:25.063440 master-0 kubenswrapper[4409]: I1203 14:26:25.063417 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: 
\"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:25.063473 master-0 kubenswrapper[4409]: I1203 14:26:25.063464 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:25.063547 master-0 kubenswrapper[4409]: I1203 14:26:25.063530 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:25.063785 master-0 kubenswrapper[4409]: I1203 14:26:25.063760 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:25.063819 master-0 kubenswrapper[4409]: I1203 14:26:25.063795 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:25.063891 master-0 
kubenswrapper[4409]: I1203 14:26:25.063874 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:25.064013 master-0 kubenswrapper[4409]: I1203 14:26:25.063982 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:25.064131 master-0 kubenswrapper[4409]: I1203 14:26:25.064112 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:25.064192 master-0 kubenswrapper[4409]: I1203 14:26:25.064172 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:25.064320 master-0 kubenswrapper[4409]: I1203 14:26:25.064302 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: 
\"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:25.064370 master-0 kubenswrapper[4409]: I1203 14:26:25.064350 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:25.064576 master-0 kubenswrapper[4409]: I1203 14:26:25.064393 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:25.064576 master-0 kubenswrapper[4409]: I1203 14:26:25.064512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:25.064649 master-0 kubenswrapper[4409]: I1203 14:26:25.064578 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: 
\"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:25.064681 master-0 kubenswrapper[4409]: I1203 14:26:25.064647 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:25.064710 master-0 kubenswrapper[4409]: I1203 14:26:25.064688 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:25.064922 master-0 kubenswrapper[4409]: E1203 14:26:25.064891 4409 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:25.064922 master-0 kubenswrapper[4409]: E1203 14:26:25.064919 4409 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.064983 master-0 kubenswrapper[4409]: E1203 14:26:25.064933 4409 projected.go:194] Error preparing data for projected volume kube-api-access-cbzpz for pod openshift-apiserver/apiserver-6985f84b49-v9vlg: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065027 master-0 kubenswrapper[4409]: E1203 14:26:25.064979 4409 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz podName:a969ddd4-e20d-4dd2-84f4-a140bac65df0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.064964362 +0000 UTC m=+33.392027018 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cbzpz" (UniqueName: "kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz") pod "apiserver-6985f84b49-v9vlg" (UID: "a969ddd4-e20d-4dd2-84f4-a140bac65df0") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065089 master-0 kubenswrapper[4409]: E1203 14:26:25.065071 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065135 master-0 kubenswrapper[4409]: E1203 14:26:25.065090 4409 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065135 master-0 kubenswrapper[4409]: E1203 14:26:25.065100 4409 projected.go:194] Error preparing data for projected volume kube-api-access-28n2f for pod openshift-ingress-canary/ingress-canary-vkpv4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065135 master-0 kubenswrapper[4409]: E1203 14:26:25.065130 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f podName:e3675c78-1902-4b92-8a93-cf2dc316f060 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065121026 +0000 UTC m=+33.392183542 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28n2f" (UniqueName: "kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f") pod "ingress-canary-vkpv4" (UID: "e3675c78-1902-4b92-8a93-cf2dc316f060") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065220 master-0 kubenswrapper[4409]: E1203 14:26:25.065185 4409 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065220 master-0 kubenswrapper[4409]: E1203 14:26:25.065197 4409 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065220 master-0 kubenswrapper[4409]: E1203 14:26:25.065206 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v7d88 for pod openshift-authentication/oauth-openshift-747bdb58b5-mn76f: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065300 master-0 kubenswrapper[4409]: E1203 14:26:25.065234 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88 podName:b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065224859 +0000 UTC m=+33.392287375 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v7d88" (UniqueName: "kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88") pod "oauth-openshift-747bdb58b5-mn76f" (UID: "b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065300 master-0 kubenswrapper[4409]: E1203 14:26:25.065286 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065300 master-0 kubenswrapper[4409]: E1203 14:26:25.065298 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065421 master-0 kubenswrapper[4409]: E1203 14:26:25.065309 4409 projected.go:194] Error preparing data for projected volume kube-api-access-rb6pb for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065421 master-0 kubenswrapper[4409]: E1203 14:26:25.065336 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb podName:918ff36b-662f-46ae-b71a-301df7e67735 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065327532 +0000 UTC m=+33.392390218 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rb6pb" (UniqueName: "kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb") pod "kube-storage-version-migrator-operator-67c4cff67d-q2lxz" (UID: "918ff36b-662f-46ae-b71a-301df7e67735") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065421 master-0 kubenswrapper[4409]: E1203 14:26:25.065390 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065421 master-0 kubenswrapper[4409]: E1203 14:26:25.065402 4409 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065421 master-0 kubenswrapper[4409]: E1203 14:26:25.065411 4409 projected.go:194] Error preparing data for projected volume kube-api-access-n798x for pod openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065575 master-0 kubenswrapper[4409]: E1203 14:26:25.065436 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x podName:e89bc996-818b-46b9-ad39-a12457acd4bb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065428215 +0000 UTC m=+33.392490731 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n798x" (UniqueName: "kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x") pod "controller-manager-7d7ddcf759-pvkrm" (UID: "e89bc996-818b-46b9-ad39-a12457acd4bb") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065575 master-0 kubenswrapper[4409]: E1203 14:26:25.065486 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065575 master-0 kubenswrapper[4409]: E1203 14:26:25.065498 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065575 master-0 kubenswrapper[4409]: E1203 14:26:25.065508 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wqkdr for pod openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065575 master-0 kubenswrapper[4409]: E1203 14:26:25.065534 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr podName:63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065525428 +0000 UTC m=+33.392587944 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wqkdr" (UniqueName: "kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr") pod "csi-snapshot-controller-86897dd478-qqwh7" (UID: "63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065586 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065597 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065606 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nc9nj for pod openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065633 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj podName:6b95a5a6-db93-4a58-aaff-3619d130c8cb nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065622911 +0000 UTC m=+33.392685427 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nc9nj" (UniqueName: "kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj") pod "cluster-storage-operator-f84784664-ntb9w" (UID: "6b95a5a6-db93-4a58-aaff-3619d130c8cb") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065704 4409 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065719 master-0 kubenswrapper[4409]: E1203 14:26:25.065717 4409 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065726 4409 projected.go:194] Error preparing data for projected volume kube-api-access-pj4f8 for pod openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065752 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8 podName:0b4c4f1f-d61e-483e-8c0b-6e2774437e4d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065743684 +0000 UTC m=+33.392806200 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pj4f8" (UniqueName: "kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8") pod "openshift-config-operator-68c95b6cf5-fmdmz" (UID: "0b4c4f1f-d61e-483e-8c0b-6e2774437e4d") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065805 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065817 4409 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065826 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jn5h6 for pod openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.065882 master-0 kubenswrapper[4409]: E1203 14:26:25.065850 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6 podName:eefee934-ac6b-44e3-a6be-1ae62362ab4f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.065842487 +0000 UTC m=+33.392905183 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jn5h6" (UniqueName: "kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6") pod "cloud-credential-operator-7c4dc67499-tjwg8" (UID: "eefee934-ac6b-44e3-a6be-1ae62362ab4f") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.065901 4409 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.065912 4409 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.065921 4409 projected.go:194] Error preparing data for projected volume kube-api-access-p5mrw for pod openshift-console-operator/console-operator-77df56447c-vsrxx: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.065946 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw podName:a8dc6511-7339-4269-9d43-14ce53bb4e7f nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.06593791 +0000 UTC m=+33.393000426 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5mrw" (UniqueName: "kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw") pod "console-operator-77df56447c-vsrxx" (UID: "a8dc6511-7339-4269-9d43-14ce53bb4e7f") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.065998 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.066028 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.066038 4409 projected.go:194] Error preparing data for projected volume kube-api-access-7q659 for pod openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.066064 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659 podName:faa79e15-1875-4865-b5e0-aecd4c447bad nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066056253 +0000 UTC m=+33.393118769 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7q659" (UniqueName: "kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659") pod "package-server-manager-75b4d49d4c-h599p" (UID: "faa79e15-1875-4865-b5e0-aecd4c447bad") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.066115 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066123 master-0 kubenswrapper[4409]: E1203 14:26:25.066126 4409 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066137 4409 projected.go:194] Error preparing data for projected volume kube-api-access-tfs27 for pod openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066163 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27 podName:1c562495-1290-4792-b4b2-639faa594ae2 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066155226 +0000 UTC m=+33.393217742 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tfs27" (UniqueName: "kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27") pod "openshift-apiserver-operator-667484ff5-n7qz8" (UID: "1c562495-1290-4792-b4b2-639faa594ae2") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066213 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066249 4409 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066261 4409 projected.go:194] Error preparing data for projected volume kube-api-access-czfkv for pod openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066287 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv podName:0535e784-8e28-4090-aa2e-df937910767c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066278109 +0000 UTC m=+33.393340625 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-czfkv" (UniqueName: "kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv") pod "authentication-operator-7479ffdf48-hpdzl" (UID: "0535e784-8e28-4090-aa2e-df937910767c") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066339 4409 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066349 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066376 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access podName:b051ae27-7879-448d-b426-4dce76e29739 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066366872 +0000 UTC m=+33.393429398 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access") pod "kube-controller-manager-operator-b5dddf8f5-kwb74" (UID: "b051ae27-7879-448d-b426-4dce76e29739") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066427 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066434 master-0 kubenswrapper[4409]: E1203 14:26:25.066437 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066449 4409 projected.go:194] Error preparing data for projected volume kube-api-access-lfdn2 for pod openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066475 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2 podName:b3eef3ef-f954-4e47-92b4-0155bc27332d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066467455 +0000 UTC m=+33.393529981 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lfdn2" (UniqueName: "kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2") pod "olm-operator-76bd5d69c7-fjrrg" (UID: "b3eef3ef-f954-4e47-92b4-0155bc27332d") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066525 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066536 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066559 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access podName:06d774e5-314a-49df-bdca-8e780c9af25a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066552237 +0000 UTC m=+33.393614753 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access") pod "kube-apiserver-operator-5b557b5f57-s5s96" (UID: "06d774e5-314a-49df-bdca-8e780c9af25a") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066612 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066623 4409 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066632 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fw8h8 for pod openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066658 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8 podName:803897bb-580e-4f7a-9be2-583fc607d1f6 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.06664886 +0000 UTC m=+33.393711386 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fw8h8" (UniqueName: "kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8") pod "cluster-olm-operator-589f5cdc9d-5h2kn" (UID: "803897bb-580e-4f7a-9be2-583fc607d1f6") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066712 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066722 4409 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066731 4409 projected.go:194] Error preparing data for projected volume kube-api-access-2fns8 for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.066748 master-0 kubenswrapper[4409]: E1203 14:26:25.066756 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8 podName:c180b512-bf0c-4ddc-a5cf-f04acc830a61 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066747622 +0000 UTC m=+33.393810138 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2fns8" (UniqueName: "kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8") pod "csi-snapshot-controller-operator-7b795784b8-44frm" (UID: "c180b512-bf0c-4ddc-a5cf-f04acc830a61") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066807 4409 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066818 4409 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066827 4409 projected.go:194] Error preparing data for projected volume kube-api-access-c5nch for pod openshift-console/downloads-6f5db8559b-96ljh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066851 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch podName:6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066844085 +0000 UTC m=+33.393906601 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c5nch" (UniqueName: "kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch") pod "downloads-6f5db8559b-96ljh" (UID: "6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066903 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066916 4409 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.066925 4409 projected.go:194] Error preparing data for projected volume kube-api-access-jkbcq for pod openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.067068 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.067083 4409 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.067106 4409 projected.go:194] Error preparing data for projected volume 
kube-api-access-nrngd for pod openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067180 master-0 kubenswrapper[4409]: E1203 14:26:25.067179 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067194 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067206 4409 projected.go:194] Error preparing data for projected volume kube-api-access-nxt87 for pod openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067269 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq podName:adbcce01-7282-4a75-843a-9623060346f0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.066946808 +0000 UTC m=+33.394009324 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jkbcq" (UniqueName: "kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq") pod "openshift-controller-manager-operator-7c4697b5f5-9f69p" (UID: "adbcce01-7282-4a75-843a-9623060346f0") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067323 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd podName:f1f2d0e1-eaaf-4037-a976-5fc2a942c50c nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.067283368 +0000 UTC m=+33.394346024 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrngd" (UniqueName: "kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd") pod "service-ca-operator-56f5898f45-fhnc5" (UID: "f1f2d0e1-eaaf-4037-a976-5fc2a942c50c") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067378 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87 podName:55351b08-d46d-4327-aa5e-ae17fdffdfb5 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.0673675 +0000 UTC m=+33.394430206 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nxt87" (UniqueName: "kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87") pod "marketplace-operator-7d67745bb7-dwcxb" (UID: "55351b08-d46d-4327-aa5e-ae17fdffdfb5") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.067464 master-0 kubenswrapper[4409]: E1203 14:26:25.067424 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067703 master-0 kubenswrapper[4409]: E1203 14:26:25.067473 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.067703 master-0 kubenswrapper[4409]: E1203 14:26:25.067569 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access podName:5aa67ace-d03a-4d06-9fb5-24777b65f2cc nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.067550125 +0000 UTC m=+33.394612721 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access") pod "openshift-kube-scheduler-operator-5f574c6c79-86bh9" (UID: "5aa67ace-d03a-4d06-9fb5-24777b65f2cc") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.776249 master-0 kubenswrapper[4409]: I1203 14:26:25.775790 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:25.776249 master-0 kubenswrapper[4409]: I1203 14:26:25.776068 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776397 4409 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776458 4409 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-6-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776401 4409 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776558 4409 projected.go:194] Error preparing 
data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-6-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776645 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access podName:6be147fe-84e2-429b-9d53-91fd67fef7c4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.776592794 +0000 UTC m=+34.103655350 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access") pod "installer-6-master-0" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: E1203 14:26:25.776673 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access podName:9c016f10-6cf2-4409-9365-05ae2e2adc5a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.776666036 +0000 UTC m=+34.103728612 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access") pod "installer-6-master-0" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: I1203 14:26:25.776953 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: I1203 14:26:25.776994 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:25.777292 master-0 kubenswrapper[4409]: I1203 14:26:25.777277 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777318 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod 
\"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777399 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777316 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777527 4409 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777544 4409 projected.go:194] Error preparing data for projected volume kube-api-access-dmqvl for pod openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777554 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777382 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 
14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777594 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777601 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777608 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777637 4409 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777643 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777654 4409 projected.go:194] Error preparing data for projected volume kube-api-access-djxkd for pod openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777667 4409 projected.go:194] Error preparing data for projected volume kube-api-access-5mk6r for pod openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n: [object 
"openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777688 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777697 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777706 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777713 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777717 4409 projected.go:194] Error preparing data for projected volume kube-api-access-mhf9r for pod openshift-marketplace/redhat-operators-6z4sc: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777721 4409 projected.go:194] Error preparing data for projected volume kube-api-access-zcqxx for pod openshift-marketplace/community-operators-7fwtv: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777713 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: E1203 14:26:25.777613 4409 projected.go:194] Error preparing data for projected volume kube-api-access-ltsnd for pod openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777637 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:25.778030 master-0 kubenswrapper[4409]: I1203 14:26:25.777766 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778319 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd podName:98392f8e-0285-4bc3-95a9-d29033639ca3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778255091 +0000 UTC m=+34.105317607 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djxkd" (UniqueName: "kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd") pod "dns-operator-6b7bcd6566-jh9m8" (UID: "98392f8e-0285-4bc3-95a9-d29033639ca3") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778402 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r podName:ab40dfa2-d8f8-4300-8a10-5aa73e1d6294 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778393325 +0000 UTC m=+34.105455831 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-5mk6r" (UniqueName: "kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r") pod "control-plane-machine-set-operator-66f4cc99d4-x278n" (UID: "ab40dfa2-d8f8-4300-8a10-5aa73e1d6294") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778431 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx podName:bff18a80-0b0f-40ab-862e-e8b1ab32040a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778423756 +0000 UTC m=+34.105486262 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zcqxx" (UniqueName: "kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx") pod "community-operators-7fwtv" (UID: "bff18a80-0b0f-40ab-862e-e8b1ab32040a") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778446 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r podName:911f6333-cdb0-425c-b79b-f892444b7097 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778440256 +0000 UTC m=+34.105502762 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-mhf9r" (UniqueName: "kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r") pod "redhat-operators-6z4sc" (UID: "911f6333-cdb0-425c-b79b-f892444b7097") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778467 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl podName:33a557d1-cdd9-47ff-afbd-a301e7f589a7 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778461287 +0000 UTC m=+34.105523793 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dmqvl" (UniqueName: "kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl") pod "route-controller-manager-74cff6cf84-bh8rz" (UID: "33a557d1-cdd9-47ff-afbd-a301e7f589a7") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.778599 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778670 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd podName:7663a25e-236d-4b1d-83ce-733ab146dee3 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778644702 +0000 UTC m=+34.105707208 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ltsnd" (UniqueName: "kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd") pod "cluster-autoscaler-operator-7f88444875-6dk29" (UID: "7663a25e-236d-4b1d-83ce-733ab146dee3") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778778 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.778777 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778793 4409 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778805 4409 projected.go:194] Error preparing data for projected volume kube-api-access-92p99 for pod openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.778832 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: 
\"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778867 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99 podName:b340553b-d483-4839-8328-518f27770832 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.778853708 +0000 UTC m=+34.105916214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-92p99" (UniqueName: "kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99") pod "cluster-samples-operator-6d64b47964-jjd7h" (UID: "b340553b-d483-4839-8328-518f27770832") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778951 4409 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778970 4409 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.778977 4409 projected.go:194] Error preparing data for projected volume kube-api-access-8wh8g for pod openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 
kubenswrapper[4409]: E1203 14:26:25.778979 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.779031 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779034 4409 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779119 4409 projected.go:194] Error preparing data for projected volume kube-api-access-bwck4 for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779132 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g podName:690d1f81-7b1f-4fd0-9b6e-154c9687c744 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779092605 +0000 UTC m=+34.106155141 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8wh8g" (UniqueName: "kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g") pod "cluster-baremetal-operator-5fdc576499-j2n8j" (UID: "690d1f81-7b1f-4fd0-9b6e-154c9687c744") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779155 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4 podName:82bd0ae5-b35d-47c8-b693-b27a9a56476d nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779149056 +0000 UTC m=+34.106211562 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwck4" (UniqueName: "kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4") pod "operator-controller-controller-manager-5f78c89466-bshxw" (UID: "82bd0ae5-b35d-47c8-b693-b27a9a56476d") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779178 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779222 4409 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779235 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779249 4409 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779256 4409 projected.go:194] Error preparing data for projected volume kube-api-access-v429m for pod openshift-network-diagnostics/network-check-target-pcchm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779275 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m podName:6d38d102-4efe-4ed3-ae23-b1e295cdaccd nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.77926993 +0000 UTC m=+34.106332436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v429m" (UniqueName: "kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m") pod "network-check-target-pcchm" (UID: "6d38d102-4efe-4ed3-ae23-b1e295cdaccd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.779192 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: I1203 14:26:25.779313 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:25.779263 master-0 kubenswrapper[4409]: E1203 14:26:25.779236 4409 projected.go:194] Error preparing data for projected volume kube-api-access-wwv7s for pod openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779408 4409 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 
14:26:25.779426 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: I1203 14:26:25.779418 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779447 4409 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779448 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s podName:6f723d97-5c65-4ae7-9085-26db8b4f2f52 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779427004 +0000 UTC m=+34.106489510 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwv7s" (UniqueName: "kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s") pod "migrator-5bcf58cf9c-dvklg" (UID: "6f723d97-5c65-4ae7-9085-26db8b4f2f52") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779462 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779458 4409 projected.go:194] Error preparing data for projected volume kube-api-access-9cnd5 for pod openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779475 4409 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779484 4409 projected.go:194] Error preparing data for projected volume kube-api-access-m789m for pod openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779433 4409 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 
14:26:25.779509 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m podName:24dfafc9-86a9-450e-ac62-a871138106c0 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779502886 +0000 UTC m=+34.106565392 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-m789m" (UniqueName: "kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m") pod "apiserver-57fd58bc7b-kktql" (UID: "24dfafc9-86a9-450e-ac62-a871138106c0") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779512 4409 projected.go:194] Error preparing data for projected volume kube-api-access-fn7fm for pod openshift-marketplace/certified-operators-t8rt7: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: I1203 14:26:25.779527 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779542 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm podName:a192c38a-4bfa-40fe-9a2d-d48260cf6443 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779533077 +0000 UTC m=+34.106595583 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fn7fm" (UniqueName: "kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm") pod "certified-operators-t8rt7" (UID: "a192c38a-4bfa-40fe-9a2d-d48260cf6443") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779580 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779590 4409 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779596 4409 projected.go:194] Error preparing data for projected volume kube-api-access-x22gr for pod openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779623 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr podName:bcc78129-4a81-410e-9a42-b12043b5a75a nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.77961752 +0000 UTC m=+34.106680026 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-x22gr" (UniqueName: "kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr") pod "ingress-operator-85dbd94574-8jfp5" (UID: "bcc78129-4a81-410e-9a42-b12043b5a75a") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:25.780670 master-0 kubenswrapper[4409]: E1203 14:26:25.779731 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5 podName:a5b3c1fb-6f81-4067-98da-681d6c7c33e4 nodeName:}" failed. No retries permitted until 2025-12-03 14:26:41.779714753 +0000 UTC m=+34.106777339 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-9cnd5" (UniqueName: "kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5") pod "catalog-operator-7cf5cf757f-zgm6l" (UID: "a5b3c1fb-6f81-4067-98da-681d6c7c33e4") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Dec 03 14:26:26.416356 master-0 kubenswrapper[4409]: I1203 14:26:26.416240 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 03 14:26:26.416356 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld
Dec 03 14:26:26.416356 master-0 kubenswrapper[4409]: [+]process-running ok
Dec 03 14:26:26.416356 master-0 kubenswrapper[4409]: healthz check failed
Dec 03 14:26:26.416356 master-0 kubenswrapper[4409]: I1203 14:26:26.416304 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815205 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.815375 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815451 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815457 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.815516 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815532 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.815593 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815636 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815652 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815764 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.815848 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815853 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815894 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815911 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815943 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815949 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815982 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815992 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816047 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.815219 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816151 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.816156 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816189 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816232 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816234 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816268 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816302 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816306 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816340 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816363 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816382 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816405 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816436 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.816513 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816524 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816562 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816574 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816606 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816617 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816639 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816659 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816677 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816699 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816724 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816746 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816770 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816800 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816837 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816844 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.816947 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816956 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.816982 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817027 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817042 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817055 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817070 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817078 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817093 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817099 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817116 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817126 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817145 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817156 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817166 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817185 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.817236 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817277 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817314 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817339 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817376 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817396 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817439 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817469 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.817518 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817567 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817591 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.817734 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: I1203 14:26:26.817760 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.817988 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818051 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818120 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818182 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818200 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818219 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818284 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818345 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.818391 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819181 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819241 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819313 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819350 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819392 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819439 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819493 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819548 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819597 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc"
Dec 03 14:26:26.819505 master-0 kubenswrapper[4409]: E1203 14:26:26.819643 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.819684 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.819727 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.819788 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820030 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820157 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820356 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820469 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820568 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820662 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820740 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.820844 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821089 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88"
Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821221 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821315 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821387 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821512 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821612 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821689 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821780 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821860 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.821953 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822065 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822164 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822254 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822357 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822427 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822503 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822601 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822699 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822773 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822846 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822914 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.822981 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.823065 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.823147 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.825612 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.825766 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.827415 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:26.829282 master-0 kubenswrapper[4409]: E1203 14:26:26.827536 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:27.415385 master-0 kubenswrapper[4409]: I1203 14:26:27.415271 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:27.415385 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:27.415385 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:27.415385 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:27.415385 master-0 kubenswrapper[4409]: I1203 14:26:27.415365 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 
03 14:26:28.025575 master-0 kubenswrapper[4409]: E1203 14:26:28.025483 4409 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 03 14:26:28.411221 master-0 kubenswrapper[4409]: I1203 14:26:28.411174 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:28.411995 master-0 kubenswrapper[4409]: I1203 14:26:28.411443 4409 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 03 14:26:28.420642 master-0 kubenswrapper[4409]: I1203 14:26:28.416779 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:28.420642 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:28.420642 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:28.420642 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:28.420642 master-0 kubenswrapper[4409]: I1203 14:26:28.417287 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:28.437484 master-0 kubenswrapper[4409]: I1203 14:26:28.437388 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-txl6b" Dec 03 14:26:28.814955 master-0 kubenswrapper[4409]: I1203 14:26:28.814879 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:28.815192 master-0 kubenswrapper[4409]: I1203 14:26:28.815033 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:28.815192 master-0 kubenswrapper[4409]: I1203 14:26:28.814966 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: I1203 14:26:28.815201 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: E1203 14:26:28.815210 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: I1203 14:26:28.815228 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: I1203 14:26:28.815243 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: I1203 14:26:28.815266 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:28.815299 master-0 kubenswrapper[4409]: I1203 14:26:28.815293 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815301 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815320 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815268 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815303 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815351 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815359 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815372 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815389 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:28.815497 master-0 kubenswrapper[4409]: I1203 14:26:28.815333 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815548 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815578 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: E1203 14:26:28.815580 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815610 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815619 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815622 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815650 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815699 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815719 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815666 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815740 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815717 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815762 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:28.815773 master-0 kubenswrapper[4409]: I1203 14:26:28.815668 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815795 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815806 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815815 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815688 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815777 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815673 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: E1203 14:26:28.815879 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815812 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815907 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815819 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815918 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815681 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815730 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815942 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815946 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815958 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815955 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815927 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815979 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815969 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.815999 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816021 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816000 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816039 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816030 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816066 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: E1203 14:26:28.816113 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816135 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816146 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816137 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816161 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816162 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816175 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816175 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816190 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:28.816180 master-0 kubenswrapper[4409]: I1203 14:26:28.816206 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: E1203 14:26:28.816319 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: I1203 14:26:28.816361 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: I1203 14:26:28.816378 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: E1203 14:26:28.816422 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: E1203 14:26:28.817160 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: E1203 14:26:28.817253 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:28.817395 master-0 kubenswrapper[4409]: E1203 14:26:28.817338 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:28.817651 master-0 kubenswrapper[4409]: E1203 14:26:28.817446 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:28.817689 master-0 kubenswrapper[4409]: E1203 14:26:28.817664 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:28.817799 master-0 kubenswrapper[4409]: E1203 14:26:28.817757 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:28.817942 master-0 kubenswrapper[4409]: E1203 14:26:28.817859 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:28.818025 master-0 kubenswrapper[4409]: E1203 14:26:28.817950 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:28.818108 master-0 kubenswrapper[4409]: E1203 14:26:28.818082 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:28.818180 master-0 kubenswrapper[4409]: E1203 14:26:28.818149 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:28.818283 master-0 kubenswrapper[4409]: E1203 14:26:28.818251 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:28.818379 master-0 kubenswrapper[4409]: E1203 14:26:28.818349 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:28.818438 master-0 kubenswrapper[4409]: E1203 14:26:28.818412 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:28.818513 master-0 kubenswrapper[4409]: E1203 14:26:28.818488 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:28.818557 master-0 kubenswrapper[4409]: I1203 14:26:28.818503 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:28.818619 master-0 kubenswrapper[4409]: E1203 14:26:28.818576 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:28.818820 master-0 kubenswrapper[4409]: E1203 14:26:28.818748 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:28.828466 master-0 kubenswrapper[4409]: I1203 14:26:28.815226 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:28.828466 master-0 kubenswrapper[4409]: E1203 14:26:28.828444 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:28.828466 master-0 kubenswrapper[4409]: E1203 14:26:28.828414 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:28.828754 master-0 kubenswrapper[4409]: E1203 14:26:28.828424 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:28.828754 master-0 kubenswrapper[4409]: E1203 14:26:28.828580 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:28.828754 master-0 kubenswrapper[4409]: E1203 14:26:28.828563 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:28.828754 master-0 kubenswrapper[4409]: E1203 14:26:28.828641 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:28.828948 master-0 kubenswrapper[4409]: E1203 14:26:28.828824 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:28.828948 master-0 kubenswrapper[4409]: E1203 14:26:28.828842 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:28.829073 master-0 kubenswrapper[4409]: E1203 14:26:28.828993 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:28.829120 master-0 kubenswrapper[4409]: E1203 14:26:28.829100 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:28.829239 master-0 kubenswrapper[4409]: E1203 14:26:28.829200 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:28.829239 master-0 kubenswrapper[4409]: E1203 14:26:28.829213 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:28.829327 master-0 kubenswrapper[4409]: E1203 14:26:28.829251 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:28.829327 master-0 kubenswrapper[4409]: E1203 14:26:28.829318 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:28.829460 master-0 kubenswrapper[4409]: E1203 14:26:28.829419 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:28.829460 master-0 kubenswrapper[4409]: E1203 14:26:28.829431 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:28.829563 master-0 kubenswrapper[4409]: E1203 14:26:28.829507 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:28.829600 master-0 kubenswrapper[4409]: E1203 14:26:28.829587 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:28.829650 master-0 kubenswrapper[4409]: E1203 14:26:28.829598 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:28.829718 master-0 kubenswrapper[4409]: E1203 14:26:28.829687 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:28.829718 master-0 kubenswrapper[4409]: E1203 14:26:28.829706 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:28.829816 master-0 kubenswrapper[4409]: E1203 14:26:28.829779 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:28.829874 master-0 kubenswrapper[4409]: E1203 14:26:28.829843 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:28.829923 master-0 kubenswrapper[4409]: E1203 14:26:28.829882 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:28.829923 master-0 kubenswrapper[4409]: E1203 14:26:28.829892 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:28.829923 master-0 kubenswrapper[4409]: E1203 14:26:28.829893 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:28.830054 master-0 kubenswrapper[4409]: E1203 14:26:28.829919 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:28.830090 master-0 kubenswrapper[4409]: E1203 14:26:28.830030 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:28.830119 master-0 kubenswrapper[4409]: E1203 14:26:28.830106 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:28.830147 master-0 kubenswrapper[4409]: E1203 14:26:28.830111 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:28.830147 master-0 kubenswrapper[4409]: E1203 14:26:28.830126 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:28.830219 master-0 kubenswrapper[4409]: E1203 14:26:28.830154 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:28.830345 master-0 kubenswrapper[4409]: E1203 14:26:28.830315 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:28.830436 master-0 kubenswrapper[4409]: E1203 14:26:28.830373 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:28.830474 master-0 kubenswrapper[4409]: E1203 14:26:28.830396 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:28.830474 master-0 kubenswrapper[4409]: E1203 14:26:28.830409 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:28.830474 master-0 kubenswrapper[4409]: E1203 14:26:28.830426 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:28.830568 master-0 kubenswrapper[4409]: E1203 14:26:28.830500 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:28.830568 master-0 kubenswrapper[4409]: E1203 14:26:28.830518 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:28.830568 master-0 kubenswrapper[4409]: E1203 14:26:28.830554 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:28.830680 master-0 kubenswrapper[4409]: E1203 14:26:28.830633 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:28.830680 master-0 kubenswrapper[4409]: E1203 14:26:28.830647 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:28.830680 master-0 kubenswrapper[4409]: E1203 14:26:28.830673 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:28.830788 master-0 kubenswrapper[4409]: E1203 14:26:28.830741 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:28.830788 master-0 kubenswrapper[4409]: E1203 14:26:28.830762 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:28.830880 master-0 kubenswrapper[4409]: E1203 14:26:28.830834 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:29.417862 master-0 kubenswrapper[4409]: I1203 14:26:29.417771 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:29.417862 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:29.417862 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:29.417862 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:29.418949 master-0 kubenswrapper[4409]: I1203 14:26:29.417875 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:29.815642 master-0 kubenswrapper[4409]: I1203 14:26:29.815590 4409 scope.go:117] "RemoveContainer" containerID="aa024d4c0653252afb473b187106942d48c2412c2b937333e81a6fb1ddebaaf4" Dec 03 14:26:30.089308 master-0 kubenswrapper[4409]: I1203 14:26:30.089282 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:26:30.417797 master-0 kubenswrapper[4409]: I1203 14:26:30.417048 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:30.417797 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:30.417797 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:30.417797 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:30.418489 master-0 kubenswrapper[4409]: I1203 14:26:30.417778 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:30.428972 master-0 kubenswrapper[4409]: I1203 14:26:30.428911 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-cb84b9cdf-qn94w_a9b62b2f-1e7a-4f1b-a988-4355d93dda46/machine-approver-controller/6.log" Dec 03 14:26:30.429462 master-0 kubenswrapper[4409]: I1203 14:26:30.429407 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-cb84b9cdf-qn94w" event={"ID":"a9b62b2f-1e7a-4f1b-a988-4355d93dda46","Type":"ContainerStarted","Data":"4f9e8e8a9e457cda341a02d839723f950c90ae39df56abdd439e8e2c39b18443"} Dec 03 14:26:30.815334 master-0 kubenswrapper[4409]: I1203 14:26:30.815268 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:30.815334 master-0 kubenswrapper[4409]: I1203 14:26:30.815317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:30.815770 master-0 kubenswrapper[4409]: I1203 14:26:30.815370 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:30.815770 master-0 kubenswrapper[4409]: I1203 14:26:30.815765 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:30.815889 master-0 kubenswrapper[4409]: I1203 14:26:30.815389 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:30.815984 master-0 kubenswrapper[4409]: I1203 14:26:30.815792 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:30.816079 master-0 kubenswrapper[4409]: E1203 14:26:30.816023 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:30.816079 master-0 kubenswrapper[4409]: I1203 14:26:30.815494 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:30.816079 master-0 kubenswrapper[4409]: I1203 14:26:30.815317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.816071 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815562 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815604 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815565 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815607 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815625 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:30.816178 master-0 kubenswrapper[4409]: I1203 14:26:30.815627 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815630 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815638 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815647 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815649 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815654 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815661 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815659 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815665 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815669 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: E1203 14:26:30.816320 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815681 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815684 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815693 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:30.816402 master-0 kubenswrapper[4409]: I1203 14:26:30.815690 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815701 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: E1203 14:26:30.816483 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815713 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815716 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815724 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815725 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815727 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815728 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815746 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815760 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: E1203 14:26:30.816680 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815781 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815803 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815808 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: E1203 14:26:30.815478 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815834 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815849 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: E1203 14:26:30.815834 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815862 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815871 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:30.816831 master-0 kubenswrapper[4409]: I1203 14:26:30.815877 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.816861 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815890 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815895 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815900 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815905 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815906 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815906 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815923 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815928 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815931 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815935 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815935 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815946 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815954 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815975 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815700 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.816045 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815502 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815554 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.817126 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815670 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.816043 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815437 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: I1203 14:26:30.815883 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.816212 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.817201 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.817277 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.817361 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:30.817444 master-0 kubenswrapper[4409]: E1203 14:26:30.817432 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817521 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817586 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817643 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817703 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817762 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817820 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817871 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817915 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.817975 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818055 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818113 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818155 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818205 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818251 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:30.818297 master-0 kubenswrapper[4409]: E1203 14:26:30.818296 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818344 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818392 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818443 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818490 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818537 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818591 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818631 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818686 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:30.818767 master-0 kubenswrapper[4409]: E1203 14:26:30.818743 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:30.819065 master-0 kubenswrapper[4409]: E1203 14:26:30.818793 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:30.819065 master-0 kubenswrapper[4409]: E1203 14:26:30.818833 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:30.819065 master-0 kubenswrapper[4409]: E1203 14:26:30.818899 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:30.819065 master-0 kubenswrapper[4409]: E1203 14:26:30.819049 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:30.819198 master-0 kubenswrapper[4409]: E1203 14:26:30.819176 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819319 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819397 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819443 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819486 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819534 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819607 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819715 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819804 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819878 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819947 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.819993 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820073 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820144 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820392 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820475 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820540 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820597 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820647 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820716 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820771 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820848 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820933 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.820994 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.821068 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.821124 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:30.821934 master-0 kubenswrapper[4409]: E1203 14:26:30.821177 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:31.415937 master-0 kubenswrapper[4409]: I1203 14:26:31.415870 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:31.415937 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:31.415937 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:31.415937 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:31.415937 master-0 kubenswrapper[4409]: I1203 14:26:31.415936 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:32.415650 master-0 kubenswrapper[4409]: I1203 14:26:32.415539 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:32.415650 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:32.415650 master-0 kubenswrapper[4409]: [+]process-running ok Dec 
03 14:26:32.415650 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:32.416338 master-0 kubenswrapper[4409]: I1203 14:26:32.415729 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:32.814967 master-0 kubenswrapper[4409]: I1203 14:26:32.814923 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.814994 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.815092 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.815108 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.814937 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.815159 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.815221 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:32.815228 master-0 kubenswrapper[4409]: I1203 14:26:32.815234 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815190 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: E1203 14:26:32.815097 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" podUID="f1f2d0e1-eaaf-4037-a976-5fc2a942c50c" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815252 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815316 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815180 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815347 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: E1203 14:26:32.815319 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" podUID="98392f8e-0285-4bc3-95a9-d29033639ca3" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815351 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815379 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815371 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815367 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815319 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815392 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:32.815439 master-0 kubenswrapper[4409]: I1203 14:26:32.815435 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:32.815924 master-0 kubenswrapper[4409]: I1203 14:26:32.814956 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:32.816049 master-0 kubenswrapper[4409]: I1203 14:26:32.816017 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:32.816109 master-0 kubenswrapper[4409]: E1203 14:26:32.815976 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" podUID="c180b512-bf0c-4ddc-a5cf-f04acc830a61" Dec 03 14:26:32.816109 master-0 kubenswrapper[4409]: I1203 14:26:32.816066 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:32.816188 master-0 kubenswrapper[4409]: I1203 14:26:32.816143 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:32.816188 master-0 kubenswrapper[4409]: I1203 14:26:32.816151 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:32.816188 master-0 kubenswrapper[4409]: I1203 14:26:32.816173 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816190 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816217 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816227 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816250 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816275 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:32.816313 master-0 kubenswrapper[4409]: I1203 14:26:32.816288 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: E1203 14:26:32.816342 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" podUID="36da3c2f-860c-4188-a7d7-5b615981a835" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816353 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816387 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816405 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816436 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816456 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816464 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:32.816550 master-0 kubenswrapper[4409]: I1203 14:26:32.816528 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:32.816856 master-0 kubenswrapper[4409]: I1203 14:26:32.816617 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:32.816981 master-0 kubenswrapper[4409]: I1203 14:26:32.816936 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:32.817084 master-0 kubenswrapper[4409]: I1203 14:26:32.816994 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:32.817084 master-0 kubenswrapper[4409]: E1203 14:26:32.816996 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" podUID="52100521-67e9-40c9-887c-eda6560f06e0" Dec 03 14:26:32.817084 master-0 kubenswrapper[4409]: I1203 14:26:32.817071 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:32.817084 master-0 kubenswrapper[4409]: I1203 14:26:32.817079 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:32.817238 master-0 kubenswrapper[4409]: I1203 14:26:32.817123 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:32.817238 master-0 kubenswrapper[4409]: I1203 14:26:32.817131 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:32.817238 master-0 kubenswrapper[4409]: I1203 14:26:32.817172 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:32.817238 master-0 kubenswrapper[4409]: I1203 14:26:32.817214 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:32.817409 master-0 kubenswrapper[4409]: I1203 14:26:32.817262 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:32.817409 master-0 kubenswrapper[4409]: I1203 14:26:32.815423 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817664 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.817722 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="56649bd4-ac30-4a70-8024-772294fede88" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817744 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817770 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817790 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817835 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817846 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.817910 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818052 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818052 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818076 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818097 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818080 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818049 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" podUID="09b7b0c6-47cc-4860-8c78-9583bb5b0a6e" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818117 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818163 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818119 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818276 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: I1203 14:26:32.818317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818405 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" podUID="b051ae27-7879-448d-b426-4dce76e29739" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818545 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" podUID="803897bb-580e-4f7a-9be2-583fc607d1f6" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818578 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" podUID="6b95a5a6-db93-4a58-aaff-3619d130c8cb" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818722 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" podUID="6f723d97-5c65-4ae7-9085-26db8b4f2f52" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818815 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" podUID="44af6af5-cecb-4dc4-b793-e8e350f8a47d" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.818905 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" podUID="5aa67ace-d03a-4d06-9fb5-24777b65f2cc" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819056 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" podUID="690d1f81-7b1f-4fd0-9b6e-154c9687c744" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819172 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ch7xd" podUID="b3c1ebb9-f052-410b-a999-45e9b75b0e58" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819330 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" podUID="8eee1d96-2f58-41a6-ae51-c158b29fc813" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819420 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" podUID="ab40dfa2-d8f8-4300-8a10-5aa73e1d6294" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819513 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819597 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819715 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" podUID="aa169e84-880b-4e6d-aeee-7ebfa1f613d2" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.819937 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" podUID="8a12409a-0be3-4023-9df3-a0f091aac8dc" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.820073 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.820294 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" podUID="b1b3ab29-77cf-48ac-8881-846c46bb9048" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.820566 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.820898 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" podUID="829d285f-d532-45e4-b1ec-54adbc21b9f9" Dec 03 14:26:32.821029 master-0 kubenswrapper[4409]: E1203 14:26:32.821057 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" podUID="b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.821450 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="5d838c1a-22e2-4096-9739-7841ef7d06ba" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.821557 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" podUID="1c562495-1290-4792-b4b2-639faa594ae2" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.821725 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.821880 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-insights/insights-operator-59d99f9b7b-74sss" podUID="c95705e3-17ef-40fe-89e8-22586a32621b" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.821970 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-pcchm" podUID="6d38d102-4efe-4ed3-ae23-b1e295cdaccd" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.822304 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" podUID="06d774e5-314a-49df-bdca-8e780c9af25a" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.822425 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" podUID="04e9e2a5-cdc2-42af-ab2c-49525390be6d" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.822513 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" podUID="ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: E1203 14:26:32.822871 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" podUID="eefee934-ac6b-44e3-a6be-1ae62362ab4f" Dec 03 14:26:32.822960 master-0 kubenswrapper[4409]: I1203 14:26:32.822915 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:32.824654 master-0 kubenswrapper[4409]: E1203 14:26:32.824605 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" podUID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" Dec 03 14:26:32.824737 master-0 kubenswrapper[4409]: E1203 14:26:32.824636 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-6z4sc" podUID="911f6333-cdb0-425c-b79b-f892444b7097" Dec 03 14:26:32.825746 master-0 kubenswrapper[4409]: E1203 14:26:32.825673 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" podUID="0535e784-8e28-4090-aa2e-df937910767c" Dec 03 14:26:32.825746 master-0 kubenswrapper[4409]: E1203 14:26:32.825704 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" podUID="b340553b-d483-4839-8328-518f27770832" Dec 03 14:26:32.825946 master-0 kubenswrapper[4409]: E1203 14:26:32.825822 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" podUID="24dfafc9-86a9-450e-ac62-a871138106c0" Dec 03 14:26:32.826226 master-0 kubenswrapper[4409]: E1203 14:26:32.826159 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" podUID="f8c6a484-5f0e-4abc-bc48-934ad0ffde0a" Dec 03 14:26:32.826388 master-0 kubenswrapper[4409]: E1203 14:26:32.826349 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" podUID="69b752ed-691c-4574-a01e-428d4bf85b75" Dec 03 14:26:32.826484 master-0 kubenswrapper[4409]: E1203 14:26:32.826460 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" Dec 03 14:26:32.826596 master-0 kubenswrapper[4409]: E1203 14:26:32.826566 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" podUID="8c6fa89f-268c-477b-9f04-238d2305cc89" Dec 03 14:26:32.826712 master-0 kubenswrapper[4409]: E1203 14:26:32.826651 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" podUID="33a557d1-cdd9-47ff-afbd-a301e7f589a7" Dec 03 14:26:32.826787 master-0 kubenswrapper[4409]: E1203 14:26:32.826766 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" podUID="7663a25e-236d-4b1d-83ce-733ab146dee3" Dec 03 14:26:32.826879 master-0 kubenswrapper[4409]: E1203 14:26:32.826858 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" podUID="4df2889c-99f7-402a-9d50-18ccf427179c" Dec 03 14:26:32.826943 master-0 kubenswrapper[4409]: E1203 14:26:32.826924 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver/installer-6-master-0" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" Dec 03 14:26:32.827030 master-0 kubenswrapper[4409]: E1203 14:26:32.826994 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" podUID="918ff36b-662f-46ae-b71a-301df7e67735" Dec 03 14:26:32.827127 master-0 kubenswrapper[4409]: E1203 14:26:32.827109 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-ddwmn" podUID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" Dec 03 14:26:32.827214 master-0 kubenswrapper[4409]: E1203 14:26:32.827195 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" Dec 03 14:26:32.827289 master-0 kubenswrapper[4409]: E1203 14:26:32.827266 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-vkpv4" podUID="e3675c78-1902-4b92-8a93-cf2dc316f060" Dec 03 14:26:32.827359 master-0 kubenswrapper[4409]: E1203 14:26:32.827340 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-77df56447c-vsrxx" podUID="a8dc6511-7339-4269-9d43-14ce53bb4e7f" Dec 03 14:26:32.827443 master-0 kubenswrapper[4409]: E1203 14:26:32.827418 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" podUID="e9f484c1-1564-49c7-a43d-bd8b971cea20" Dec 03 14:26:32.827507 master-0 kubenswrapper[4409]: E1203 14:26:32.827482 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler/installer-6-master-0" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" Dec 03 14:26:32.827591 master-0 kubenswrapper[4409]: E1203 14:26:32.827573 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" podUID="74e39dce-29d5-4b2a-ab19-386b6cdae94d" Dec 03 14:26:32.827673 master-0 kubenswrapper[4409]: E1203 14:26:32.827654 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" podUID="bcc78129-4a81-410e-9a42-b12043b5a75a" Dec 03 14:26:32.827750 master-0 kubenswrapper[4409]: E1203 14:26:32.827733 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" podUID="82bd0ae5-b35d-47c8-b693-b27a9a56476d" Dec 03 14:26:32.827816 master-0 kubenswrapper[4409]: E1203 14:26:32.827796 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" Dec 03 14:26:32.827894 master-0 kubenswrapper[4409]: E1203 14:26:32.827872 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" podUID="faa79e15-1875-4865-b5e0-aecd4c447bad" Dec 03 14:26:32.828071 master-0 kubenswrapper[4409]: E1203 14:26:32.828042 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" podUID="a5b3c1fb-6f81-4067-98da-681d6c7c33e4" Dec 03 14:26:32.828201 master-0 kubenswrapper[4409]: E1203 14:26:32.828178 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7fwtv" podUID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" Dec 03 14:26:32.828352 master-0 kubenswrapper[4409]: E1203 14:26:32.828329 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" podUID="adbcce01-7282-4a75-843a-9623060346f0" Dec 03 14:26:32.828418 master-0 kubenswrapper[4409]: E1203 14:26:32.828397 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" podUID="d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff" Dec 03 14:26:32.828475 master-0 kubenswrapper[4409]: E1203 14:26:32.828459 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" podUID="63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4" Dec 03 14:26:32.828613 master-0 kubenswrapper[4409]: E1203 14:26:32.828278 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" podUID="e89bc996-818b-46b9-ad39-a12457acd4bb" Dec 03 14:26:32.828661 master-0 kubenswrapper[4409]: E1203 14:26:32.828625 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" podUID="b02244d0-f4ef-4702-950d-9e3fb5ced128" Dec 03 14:26:32.828932 master-0 kubenswrapper[4409]: E1203 14:26:32.828897 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-5m4f8" podUID="4669137a-fbc4-41e1-8eeb-5f06b9da2641" Dec 03 14:26:33.416392 master-0 kubenswrapper[4409]: I1203 14:26:33.416337 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:33.416392 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:33.416392 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:33.416392 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:33.416950 master-0 kubenswrapper[4409]: I1203 14:26:33.416397 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:34.415587 master-0 kubenswrapper[4409]: I1203 14:26:34.415520 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:34.415587 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:34.415587 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:34.415587 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:34.415928 master-0 kubenswrapper[4409]: I1203 14:26:34.415598 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:34.819795 master-0 kubenswrapper[4409]: I1203 14:26:34.819644 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:34.819795 master-0 kubenswrapper[4409]: I1203 14:26:34.819715 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:34.819795 master-0 kubenswrapper[4409]: I1203 14:26:34.819674 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.819924 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.819942 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820034 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820066 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820066 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820096 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820150 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820157 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820179 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820187 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820219 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820237 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820251 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820252 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820280 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820290 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.819663 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820408 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820463 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820477 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820497 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820512 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820519 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820536 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820565 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820568 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820583 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820623 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:34.820622 master-0 kubenswrapper[4409]: I1203 14:26:34.820648 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.820650 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821047 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821099 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821145 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821212 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821246 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821271 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821289 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821259 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821318 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821338 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821362 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.820656 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821394 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821463 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821511 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821607 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821681 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:34.821724 master-0 kubenswrapper[4409]: I1203 14:26:34.821726 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821815 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821838 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821857 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821911 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821950 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.821994 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822032 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822038 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822052 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822103 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822141 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:34.822504 master-0 kubenswrapper[4409]: I1203 14:26:34.822251 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:34.822933 master-0 kubenswrapper[4409]: I1203 14:26:34.822563 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:34.822933 master-0 kubenswrapper[4409]: I1203 14:26:34.822720 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:34.823431 master-0 kubenswrapper[4409]: I1203 14:26:34.823377 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.824544 master-0 kubenswrapper[4409]: I1203 14:26:34.824082 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:26:34.824544 master-0 kubenswrapper[4409]: I1203 14:26:34.824087 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:34.824544 master-0 kubenswrapper[4409]: I1203 14:26:34.824243 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.824544 master-0 kubenswrapper[4409]: I1203 14:26:34.824265 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx" Dec 03 14:26:34.826451 master-0 kubenswrapper[4409]: I1203 14:26:34.824842 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:34.826451 master-0 kubenswrapper[4409]: I1203 14:26:34.824876 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:34.837135 master-0 kubenswrapper[4409]: I1203 14:26:34.837083 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5" Dec 03 14:26:34.837389 master-0 kubenswrapper[4409]: I1203 14:26:34.837179 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.837389 master-0 kubenswrapper[4409]: I1203 14:26:34.837329 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Dec 03 14:26:34.837590 master-0 kubenswrapper[4409]: I1203 14:26:34.837561 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 14:26:34.837901 master-0 kubenswrapper[4409]: I1203 14:26:34.837831 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 14:26:34.838275 master-0 kubenswrapper[4409]: I1203 14:26:34.838151 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 03 14:26:34.838275 master-0 kubenswrapper[4409]: I1203 14:26:34.838223 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840480 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840503 
4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840530 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7ctx2" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840735 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840747 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840734 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.840946 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g" Dec 03 14:26:34.841067 master-0 kubenswrapper[4409]: I1203 14:26:34.841059 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841151 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841198 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841225 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 
14:26:34.841834 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841849 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841921 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Dec 03 14:26:34.841983 master-0 kubenswrapper[4409]: I1203 14:26:34.841850 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 03 14:26:34.842229 master-0 kubenswrapper[4409]: I1203 14:26:34.841995 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 14:26:34.842229 master-0 kubenswrapper[4409]: I1203 14:26:34.842090 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 03 14:26:34.842229 master-0 kubenswrapper[4409]: I1203 14:26:34.842163 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Dec 03 14:26:34.842229 master-0 kubenswrapper[4409]: I1203 14:26:34.842205 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.842229 master-0 kubenswrapper[4409]: I1203 14:26:34.842230 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: I1203 14:26:34.842276 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: 
I1203 14:26:34.842280 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: I1203 14:26:34.842315 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bdlwz" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: I1203 14:26:34.842375 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: I1203 14:26:34.842091 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.842441 master-0 kubenswrapper[4409]: I1203 14:26:34.842400 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 03 14:26:34.842666 master-0 kubenswrapper[4409]: I1203 14:26:34.842543 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz" Dec 03 14:26:34.842666 master-0 kubenswrapper[4409]: I1203 14:26:34.842661 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Dec 03 14:26:34.842744 master-0 kubenswrapper[4409]: I1203 14:26:34.842685 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 03 14:26:34.842912 master-0 kubenswrapper[4409]: I1203 14:26:34.842834 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-twpdm" Dec 03 14:26:34.842912 master-0 kubenswrapper[4409]: I1203 14:26:34.842894 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Dec 03 14:26:34.843036 master-0 kubenswrapper[4409]: I1203 14:26:34.842945 4409 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.843090 master-0 kubenswrapper[4409]: I1203 14:26:34.843046 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 14:26:34.843145 master-0 kubenswrapper[4409]: I1203 14:26:34.843103 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 03 14:26:34.843145 master-0 kubenswrapper[4409]: I1203 14:26:34.843123 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 03 14:26:34.843238 master-0 kubenswrapper[4409]: I1203 14:26:34.843204 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 03 14:26:34.843238 master-0 kubenswrapper[4409]: I1203 14:26:34.843234 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 03 14:26:34.843331 master-0 kubenswrapper[4409]: I1203 14:26:34.843298 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4" Dec 03 14:26:34.843372 master-0 kubenswrapper[4409]: I1203 14:26:34.843351 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68" Dec 03 14:26:34.843417 master-0 kubenswrapper[4409]: I1203 14:26:34.843390 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 03 14:26:34.843459 master-0 kubenswrapper[4409]: I1203 14:26:34.843434 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Dec 03 14:26:34.843516 master-0 kubenswrapper[4409]: I1203 14:26:34.843470 
4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 14:26:34.843549 master-0 kubenswrapper[4409]: I1203 14:26:34.843533 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 03 14:26:34.843549 master-0 kubenswrapper[4409]: I1203 14:26:34.843541 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8zh52" Dec 03 14:26:34.843620 master-0 kubenswrapper[4409]: I1203 14:26:34.843571 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Dec 03 14:26:34.843670 master-0 kubenswrapper[4409]: I1203 14:26:34.843643 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 14:26:34.843709 master-0 kubenswrapper[4409]: I1203 14:26:34.843680 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Dec 03 14:26:34.843709 master-0 kubenswrapper[4409]: I1203 14:26:34.843698 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 03 14:26:34.843789 master-0 kubenswrapper[4409]: I1203 14:26:34.843750 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 03 14:26:34.843789 master-0 kubenswrapper[4409]: I1203 14:26:34.843781 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 03 14:26:34.843852 master-0 kubenswrapper[4409]: I1203 14:26:34.843806 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 14:26:34.843852 master-0 kubenswrapper[4409]: I1203 14:26:34.843823 4409 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 03 14:26:34.843852 master-0 kubenswrapper[4409]: I1203 14:26:34.843838 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 14:26:34.843936 master-0 kubenswrapper[4409]: I1203 14:26:34.843900 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Dec 03 14:26:34.843936 master-0 kubenswrapper[4409]: I1203 14:26:34.843912 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 14:26:34.843936 master-0 kubenswrapper[4409]: I1203 14:26:34.843917 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 03 14:26:34.844032 master-0 kubenswrapper[4409]: I1203 14:26:34.843942 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 03 14:26:34.844080 master-0 kubenswrapper[4409]: I1203 14:26:34.844032 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 14:26:34.844080 master-0 kubenswrapper[4409]: I1203 14:26:34.844040 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Dec 03 14:26:34.844080 master-0 kubenswrapper[4409]: I1203 14:26:34.844044 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Dec 03 14:26:34.844080 master-0 kubenswrapper[4409]: I1203 14:26:34.844071 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv" Dec 03 14:26:34.844391 master-0 kubenswrapper[4409]: I1203 14:26:34.844355 4409 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 14:26:34.844391 master-0 kubenswrapper[4409]: I1203 14:26:34.844376 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 03 14:26:34.844391 master-0 kubenswrapper[4409]: I1203 14:26:34.844385 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.844705 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.844847 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.844969 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.844993 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.845078 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Dec 03 14:26:34.845390 master-0 kubenswrapper[4409]: I1203 14:26:34.845199 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 03 14:26:34.846217 master-0 kubenswrapper[4409]: I1203 14:26:34.845891 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 14:26:34.846217 master-0 kubenswrapper[4409]: I1203 14:26:34.845891 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Dec 03 14:26:34.846217 master-0 kubenswrapper[4409]: I1203 14:26:34.846177 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 14:26:34.846417 master-0 kubenswrapper[4409]: I1203 14:26:34.846333 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 03 14:26:34.846417 master-0 kubenswrapper[4409]: I1203 14:26:34.846361 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 03 14:26:34.846587 master-0 kubenswrapper[4409]: I1203 14:26:34.846452 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 14:26:34.846962 master-0 kubenswrapper[4409]: I1203 14:26:34.846943 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Dec 03 14:26:34.847155 master-0 kubenswrapper[4409]: I1203 14:26:34.847132 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 03 14:26:34.847249 master-0 kubenswrapper[4409]: I1203 14:26:34.847194 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 03 14:26:34.849343 master-0 kubenswrapper[4409]: I1203 14:26:34.849101 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 03 14:26:34.850308 master-0 kubenswrapper[4409]: I1203 14:26:34.850271 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 03 14:26:34.851303 master-0 kubenswrapper[4409]: I1203 14:26:34.851266 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"metrics-server-2bc14vqi7sofg" Dec 03 14:26:34.852967 master-0 kubenswrapper[4409]: I1203 14:26:34.852936 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 14:26:34.855511 master-0 kubenswrapper[4409]: I1203 14:26:34.855434 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Dec 03 14:26:34.863884 master-0 kubenswrapper[4409]: I1203 14:26:34.863828 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Dec 03 14:26:34.864376 master-0 kubenswrapper[4409]: I1203 14:26:34.864174 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 14:26:34.864697 master-0 kubenswrapper[4409]: I1203 14:26:34.864614 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Dec 03 14:26:34.868036 master-0 kubenswrapper[4409]: I1203 14:26:34.867665 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 03 14:26:34.871560 master-0 kubenswrapper[4409]: I1203 14:26:34.871516 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 03 14:26:34.873955 master-0 kubenswrapper[4409]: I1203 14:26:34.873092 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 03 14:26:34.878176 master-0 kubenswrapper[4409]: I1203 14:26:34.875368 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 14:26:34.883167 master-0 kubenswrapper[4409]: I1203 14:26:34.883062 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Dec 03 14:26:34.903365 master-0 kubenswrapper[4409]: I1203 14:26:34.903323 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 03 14:26:34.922836 master-0 kubenswrapper[4409]: I1203 14:26:34.922781 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7n524" Dec 03 14:26:34.942069 master-0 kubenswrapper[4409]: I1203 14:26:34.942025 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 03 14:26:34.962577 master-0 kubenswrapper[4409]: I1203 14:26:34.962516 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 03 14:26:34.983103 master-0 kubenswrapper[4409]: I1203 14:26:34.983044 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 14:26:35.002925 master-0 kubenswrapper[4409]: I1203 14:26:35.002873 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 14:26:35.030715 master-0 kubenswrapper[4409]: I1203 14:26:35.030671 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 03 14:26:35.043032 master-0 kubenswrapper[4409]: I1203 14:26:35.042969 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 03 14:26:35.063394 master-0 kubenswrapper[4409]: I1203 14:26:35.062938 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:26:35.082203 master-0 kubenswrapper[4409]: I1203 14:26:35.082179 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 03 14:26:35.103047 master-0 kubenswrapper[4409]: I1203 14:26:35.102955 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-cb9jg" Dec 03 14:26:35.123220 master-0 kubenswrapper[4409]: I1203 14:26:35.123161 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-n8h5v" Dec 03 14:26:35.144989 master-0 kubenswrapper[4409]: I1203 14:26:35.142834 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-2blfd" Dec 03 14:26:35.162764 master-0 kubenswrapper[4409]: I1203 14:26:35.162743 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8" Dec 03 14:26:35.182360 master-0 kubenswrapper[4409]: I1203 14:26:35.182310 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 03 14:26:35.203110 master-0 kubenswrapper[4409]: I1203 14:26:35.203045 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 03 14:26:35.222430 master-0 kubenswrapper[4409]: I1203 14:26:35.222386 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 14:26:35.242939 master-0 kubenswrapper[4409]: I1203 14:26:35.242904 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Dec 03 14:26:35.262657 master-0 kubenswrapper[4409]: I1203 14:26:35.262619 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd" Dec 03 14:26:35.283001 master-0 kubenswrapper[4409]: I1203 14:26:35.282951 4409 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Dec 03 14:26:35.310818 master-0 kubenswrapper[4409]: I1203 14:26:35.310757 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Dec 03 14:26:35.331151 master-0 kubenswrapper[4409]: I1203 14:26:35.330974 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs" Dec 03 14:26:35.342499 master-0 kubenswrapper[4409]: I1203 14:26:35.342431 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Dec 03 14:26:35.362814 master-0 kubenswrapper[4409]: I1203 14:26:35.362754 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Dec 03 14:26:35.385559 master-0 kubenswrapper[4409]: I1203 14:26:35.385430 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 14:26:35.402711 master-0 kubenswrapper[4409]: I1203 14:26:35.402649 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Dec 03 14:26:35.415239 master-0 kubenswrapper[4409]: I1203 14:26:35.415180 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:35.415239 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:35.415239 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:35.415239 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:35.415605 master-0 kubenswrapper[4409]: I1203 14:26:35.415262 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" 
podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:35.422641 master-0 kubenswrapper[4409]: I1203 14:26:35.422596 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-fwsd5" Dec 03 14:26:35.442161 master-0 kubenswrapper[4409]: I1203 14:26:35.442119 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Dec 03 14:26:35.462635 master-0 kubenswrapper[4409]: I1203 14:26:35.462595 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 14:26:35.482303 master-0 kubenswrapper[4409]: I1203 14:26:35.482255 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 03 14:26:35.502834 master-0 kubenswrapper[4409]: I1203 14:26:35.502795 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 14:26:35.522399 master-0 kubenswrapper[4409]: I1203 14:26:35.522341 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 03 14:26:35.542337 master-0 kubenswrapper[4409]: I1203 14:26:35.542290 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Dec 03 14:26:35.562503 master-0 kubenswrapper[4409]: I1203 14:26:35.562376 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 03 14:26:35.844591 master-0 kubenswrapper[4409]: I1203 14:26:35.844507 4409 request.go:700] Waited for 1.006589514s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-jmtqw&limit=500&resourceVersion=0 Dec 03 14:26:35.889681 master-0 kubenswrapper[4409]: I1203 14:26:35.887080 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Dec 03 14:26:35.890620 master-0 kubenswrapper[4409]: I1203 14:26:35.889953 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 03 14:26:35.890620 master-0 kubenswrapper[4409]: I1203 14:26:35.890304 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-6sltv" Dec 03 14:26:35.890620 master-0 kubenswrapper[4409]: I1203 14:26:35.890303 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl" Dec 03 14:26:35.896124 master-0 kubenswrapper[4409]: I1203 14:26:35.891075 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Dec 03 14:26:35.897556 master-0 kubenswrapper[4409]: I1203 14:26:35.897488 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 03 14:26:35.897556 master-0 kubenswrapper[4409]: I1203 14:26:35.897500 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 03 14:26:35.897989 master-0 kubenswrapper[4409]: I1203 14:26:35.897924 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 14:26:35.897989 master-0 kubenswrapper[4409]: I1203 14:26:35.897963 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 03 14:26:35.898101 
master-0 kubenswrapper[4409]: I1203 14:26:35.898067 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 03 14:26:35.898165 master-0 kubenswrapper[4409]: I1203 14:26:35.898142 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 03 14:26:35.898219 master-0 kubenswrapper[4409]: I1203 14:26:35.897953 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Dec 03 14:26:35.900371 master-0 kubenswrapper[4409]: I1203 14:26:35.898891 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-jmtqw" Dec 03 14:26:35.902973 master-0 kubenswrapper[4409]: I1203 14:26:35.901135 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 03 14:26:35.902973 master-0 kubenswrapper[4409]: I1203 14:26:35.901578 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-js47f" Dec 03 14:26:35.902973 master-0 kubenswrapper[4409]: I1203 14:26:35.901814 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 03 14:26:35.903463 master-0 kubenswrapper[4409]: I1203 14:26:35.903234 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 03 14:26:35.923308 master-0 kubenswrapper[4409]: I1203 14:26:35.923263 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 03 14:26:35.943687 master-0 kubenswrapper[4409]: I1203 14:26:35.943623 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-x2zgz" Dec 03 14:26:35.963862 master-0 
kubenswrapper[4409]: I1203 14:26:35.963804 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 03 14:26:35.982968 master-0 kubenswrapper[4409]: I1203 14:26:35.982921 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99" Dec 03 14:26:36.003764 master-0 kubenswrapper[4409]: I1203 14:26:36.003720 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 14:26:36.023415 master-0 kubenswrapper[4409]: I1203 14:26:36.023344 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 14:26:36.043552 master-0 kubenswrapper[4409]: I1203 14:26:36.043503 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 03 14:26:36.062086 master-0 kubenswrapper[4409]: I1203 14:26:36.061467 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 03 14:26:36.083367 master-0 kubenswrapper[4409]: I1203 14:26:36.083298 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 14:26:36.103172 master-0 kubenswrapper[4409]: I1203 14:26:36.103030 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 03 14:26:36.123061 master-0 kubenswrapper[4409]: I1203 14:26:36.122970 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 03 14:26:36.143392 master-0 kubenswrapper[4409]: I1203 14:26:36.143318 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 03 14:26:36.162680 master-0 
kubenswrapper[4409]: I1203 14:26:36.162589 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 03 14:26:36.182905 master-0 kubenswrapper[4409]: I1203 14:26:36.182830 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 14:26:36.202312 master-0 kubenswrapper[4409]: I1203 14:26:36.202242 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 03 14:26:36.222172 master-0 kubenswrapper[4409]: I1203 14:26:36.222099 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2fgkw" Dec 03 14:26:36.242559 master-0 kubenswrapper[4409]: I1203 14:26:36.242503 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d" Dec 03 14:26:36.262120 master-0 kubenswrapper[4409]: I1203 14:26:36.262071 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 03 14:26:36.282718 master-0 kubenswrapper[4409]: I1203 14:26:36.282655 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 03 14:26:36.302436 master-0 kubenswrapper[4409]: I1203 14:26:36.302363 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 03 14:26:36.331449 master-0 kubenswrapper[4409]: I1203 14:26:36.331389 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 14:26:36.346406 master-0 kubenswrapper[4409]: I1203 14:26:36.346063 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9" Dec 03 14:26:36.363032 master-0 kubenswrapper[4409]: I1203 14:26:36.362901 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 03 14:26:36.383525 master-0 kubenswrapper[4409]: I1203 14:26:36.383416 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 14:26:36.402795 master-0 kubenswrapper[4409]: I1203 14:26:36.402753 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 03 14:26:36.415771 master-0 kubenswrapper[4409]: I1203 14:26:36.415726 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:36.415771 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:36.415771 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:36.415771 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:36.416297 master-0 kubenswrapper[4409]: I1203 14:26:36.415793 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:36.423104 master-0 kubenswrapper[4409]: I1203 14:26:36.423068 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 03 14:26:36.443526 master-0 kubenswrapper[4409]: I1203 14:26:36.442545 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 03 14:26:36.462376 master-0 kubenswrapper[4409]: I1203 
14:26:36.462325 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 14:26:36.482555 master-0 kubenswrapper[4409]: I1203 14:26:36.482523 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 14:26:36.503403 master-0 kubenswrapper[4409]: I1203 14:26:36.503343 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 03 14:26:36.523225 master-0 kubenswrapper[4409]: I1203 14:26:36.523147 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 03 14:26:36.543700 master-0 kubenswrapper[4409]: I1203 14:26:36.543641 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 14:26:36.574478 master-0 kubenswrapper[4409]: I1203 14:26:36.573808 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 14:26:36.583086 master-0 kubenswrapper[4409]: I1203 14:26:36.583025 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 03 14:26:36.601774 master-0 kubenswrapper[4409]: I1203 14:26:36.601725 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 03 14:26:36.622142 master-0 kubenswrapper[4409]: I1203 14:26:36.621918 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 03 14:26:36.643030 master-0 kubenswrapper[4409]: I1203 14:26:36.642953 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 03 14:26:36.662875 master-0 kubenswrapper[4409]: 
I1203 14:26:36.662815 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nqkqh" Dec 03 14:26:36.684681 master-0 kubenswrapper[4409]: I1203 14:26:36.684613 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 03 14:26:36.703981 master-0 kubenswrapper[4409]: I1203 14:26:36.703487 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 03 14:26:36.730485 master-0 kubenswrapper[4409]: I1203 14:26:36.730408 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 03 14:26:36.742850 master-0 kubenswrapper[4409]: I1203 14:26:36.742570 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 14:26:36.762811 master-0 kubenswrapper[4409]: I1203 14:26:36.762748 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 03 14:26:36.782892 master-0 kubenswrapper[4409]: I1203 14:26:36.782828 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 03 14:26:36.803238 master-0 kubenswrapper[4409]: I1203 14:26:36.803170 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 03 14:26:36.822967 master-0 kubenswrapper[4409]: I1203 14:26:36.822914 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 03 14:26:36.842294 master-0 kubenswrapper[4409]: I1203 14:26:36.842248 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 03 14:26:36.860818 master-0 kubenswrapper[4409]: I1203 14:26:36.860760 4409 request.go:700] Waited for 2.021195197s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 03 14:26:36.863964 master-0 kubenswrapper[4409]: I1203 14:26:36.863937 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 14:26:36.883763 master-0 kubenswrapper[4409]: I1203 14:26:36.883635 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 03 14:26:36.911634 master-0 kubenswrapper[4409]: I1203 14:26:36.911594 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 14:26:36.921732 master-0 kubenswrapper[4409]: I1203 14:26:36.921689 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 03 14:26:36.944699 master-0 kubenswrapper[4409]: I1203 14:26:36.943186 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 03 14:26:36.963395 master-0 kubenswrapper[4409]: I1203 14:26:36.963346 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 03 14:26:36.983591 master-0 kubenswrapper[4409]: I1203 14:26:36.983156 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 14:26:37.003480 master-0 kubenswrapper[4409]: I1203 14:26:37.003418 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 03 14:26:37.023324 master-0 kubenswrapper[4409]: I1203 14:26:37.023268 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 03 14:26:37.042499 master-0 kubenswrapper[4409]: I1203 14:26:37.042402 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 03 14:26:37.067656 master-0 kubenswrapper[4409]: I1203 14:26:37.067609 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 03 14:26:37.082484 master-0 kubenswrapper[4409]: I1203 14:26:37.082411 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 03 14:26:37.103719 master-0 kubenswrapper[4409]: I1203 14:26:37.103620 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-vpms9" Dec 03 14:26:37.123750 master-0 kubenswrapper[4409]: I1203 14:26:37.123677 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 03 14:26:37.142969 master-0 kubenswrapper[4409]: I1203 14:26:37.142820 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 03 14:26:37.162999 master-0 kubenswrapper[4409]: I1203 14:26:37.162937 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 03 14:26:37.183520 master-0 kubenswrapper[4409]: I1203 14:26:37.183482 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" Dec 03 14:26:37.202666 master-0 kubenswrapper[4409]: I1203 14:26:37.202579 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy" Dec 03 14:26:37.222512 master-0 kubenswrapper[4409]: I1203 14:26:37.222442 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 14:26:37.247133 master-0 kubenswrapper[4409]: I1203 14:26:37.244644 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Dec 03 14:26:37.291032 master-0 kubenswrapper[4409]: I1203 14:26:37.271874 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 14:26:37.305034 master-0 kubenswrapper[4409]: I1203 14:26:37.292551 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 14:26:37.305034 master-0 kubenswrapper[4409]: I1203 14:26:37.303604 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6" Dec 03 14:26:37.326030 master-0 kubenswrapper[4409]: I1203 14:26:37.324987 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Dec 03 14:26:37.343810 master-0 kubenswrapper[4409]: I1203 14:26:37.343621 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Dec 03 14:26:37.363264 master-0 kubenswrapper[4409]: I1203 14:26:37.363207 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Dec 03 14:26:37.383110 master-0 kubenswrapper[4409]: I1203 14:26:37.383067 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Dec 03 14:26:37.402812 master-0 kubenswrapper[4409]: I1203 14:26:37.402660 4409 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Dec 03 14:26:37.416098 master-0 kubenswrapper[4409]: I1203 14:26:37.416050 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:37.416098 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:37.416098 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:37.416098 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:37.416444 master-0 kubenswrapper[4409]: I1203 14:26:37.416125 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:37.423624 master-0 kubenswrapper[4409]: I1203 14:26:37.423587 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 03 14:26:37.442812 master-0 kubenswrapper[4409]: I1203 14:26:37.442753 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 14:26:37.463051 master-0 kubenswrapper[4409]: I1203 14:26:37.462975 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 03 14:26:37.483348 master-0 kubenswrapper[4409]: I1203 14:26:37.483256 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l6rgr" Dec 03 14:26:37.502649 master-0 kubenswrapper[4409]: I1203 14:26:37.502592 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 03 14:26:37.523610 
master-0 kubenswrapper[4409]: I1203 14:26:37.523546 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 14:26:37.543181 master-0 kubenswrapper[4409]: I1203 14:26:37.543133 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 14:26:37.562691 master-0 kubenswrapper[4409]: I1203 14:26:37.562625 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 03 14:26:37.583836 master-0 kubenswrapper[4409]: I1203 14:26:37.583740 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 14:26:37.603019 master-0 kubenswrapper[4409]: I1203 14:26:37.602956 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 03 14:26:37.623191 master-0 kubenswrapper[4409]: I1203 14:26:37.623146 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 03 14:26:37.643786 master-0 kubenswrapper[4409]: I1203 14:26:37.643762 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Dec 03 14:26:37.665401 master-0 kubenswrapper[4409]: I1203 14:26:37.665298 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Dec 03 14:26:37.689105 master-0 kubenswrapper[4409]: I1203 14:26:37.688991 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Dec 03 14:26:37.703490 master-0 kubenswrapper[4409]: I1203 14:26:37.702506 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 03 
14:26:37.722809 master-0 kubenswrapper[4409]: I1203 14:26:37.722752 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Dec 03 14:26:37.746096 master-0 kubenswrapper[4409]: I1203 14:26:37.746037 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" Dec 03 14:26:37.763083 master-0 kubenswrapper[4409]: I1203 14:26:37.762986 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Dec 03 14:26:37.782691 master-0 kubenswrapper[4409]: I1203 14:26:37.782623 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 03 14:26:37.810343 master-0 kubenswrapper[4409]: I1203 14:26:37.810246 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Dec 03 14:26:37.823306 master-0 kubenswrapper[4409]: I1203 14:26:37.823251 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Dec 03 14:26:37.843709 master-0 kubenswrapper[4409]: I1203 14:26:37.843610 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wksjv" Dec 03 14:26:37.861193 master-0 kubenswrapper[4409]: I1203 14:26:37.861130 4409 request.go:700] Waited for 3.01781037s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dprometheus-k8s-kube-rbac-proxy-web&limit=500&resourceVersion=0 Dec 03 14:26:37.863572 master-0 kubenswrapper[4409]: I1203 14:26:37.863541 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Dec 03 14:26:37.883223 master-0 kubenswrapper[4409]: I1203 14:26:37.883165 4409 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Dec 03 14:26:37.903663 master-0 kubenswrapper[4409]: I1203 14:26:37.903579 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Dec 03 14:26:37.923053 master-0 kubenswrapper[4409]: I1203 14:26:37.922903 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Dec 03 14:26:37.946891 master-0 kubenswrapper[4409]: I1203 14:26:37.946841 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 03 14:26:37.970472 master-0 kubenswrapper[4409]: I1203 14:26:37.970413 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Dec 03 14:26:37.983801 master-0 kubenswrapper[4409]: I1203 14:26:37.983265 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Dec 03 14:26:38.416598 master-0 kubenswrapper[4409]: I1203 14:26:38.416539 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:38.416598 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:38.416598 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:38.416598 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:38.416878 master-0 kubenswrapper[4409]: I1203 14:26:38.416629 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 
03 14:26:39.415905 master-0 kubenswrapper[4409]: I1203 14:26:39.415838 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:39.415905 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:39.415905 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:39.415905 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:39.416766 master-0 kubenswrapper[4409]: I1203 14:26:39.415910 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:40.416924 master-0 kubenswrapper[4409]: I1203 14:26:40.416849 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:40.416924 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:40.416924 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:40.416924 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:40.416924 master-0 kubenswrapper[4409]: I1203 14:26:40.416931 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:40.781220 master-0 kubenswrapper[4409]: I1203 14:26:40.781136 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.781220 master-0 kubenswrapper[4409]: I1203 14:26:40.781190 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.781220 master-0 kubenswrapper[4409]: I1203 14:26:40.781216 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.781220 master-0 kubenswrapper[4409]: I1203 14:26:40.781237 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:40.781220 master-0 kubenswrapper[4409]: I1203 14:26:40.781257 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.781865 master-0 
kubenswrapper[4409]: I1203 14:26:40.781298 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781357 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781386 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781414 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781442 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod 
\"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781478 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781505 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781532 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781573 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781843 
4409 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 03 14:26:40.781865 master-0 kubenswrapper[4409]: I1203 14:26:40.781598 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.781898 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.781920 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782045 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " 
pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782387 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782432 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782476 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782520 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782550 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod 
\"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782577 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782633 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782663 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:40.782929 master-0 
kubenswrapper[4409]: I1203 14:26:40.782689 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782710 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782729 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782750 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782789 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod 
\"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782807 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782825 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782842 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782874 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782898 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782919 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.782929 master-0 kubenswrapper[4409]: I1203 14:26:40.782958 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.782997 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783051 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783116 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783143 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783187 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783219 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-audit-policies\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783231 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783338 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783372 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783374 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-client-ca\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783430 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783511 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783553 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783585 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783608 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783633 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783655 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783676 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783697 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783719 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783728 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-trusted-ca-bundle\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783741 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783776 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783795 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.783803 master-0 kubenswrapper[4409]: I1203 14:26:40.783817 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783837 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783860 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783884 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783932 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783955 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.783975 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784027 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784048 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784072 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784094 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784123 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784152 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784179 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784218 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784259 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784283 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784305 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784342 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784394 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784425 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784485 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784539 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l"
Dec 03 14:26:40.784560 master-0 kubenswrapper[4409]: I1203 14:26:40.784558 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-config\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784570 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784677 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784718 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784729 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-metrics-server-audit-profiles\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784755 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784821 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784865 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784896 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784938 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.784968 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785026 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785061 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785088 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785114 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785139 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785167 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785194 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785363 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:40.785386 master-0 kubenswrapper[4409]: I1203 14:26:40.785393 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785423 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785515 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785587 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785612 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785980 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-service-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.785993 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.786067 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:40.786119 master-0 kubenswrapper[4409]: I1203 14:26:40.786101 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786148 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786179 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786208 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786249 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786278 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786305 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786332 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786362 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:40.786401 master-0 kubenswrapper[4409]: I1203 14:26:40.786390 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786420 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786464 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786494 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786523 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786547 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786583 4409 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786615 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786641 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.786695 master-0 kubenswrapper[4409]: I1203 14:26:40.786692 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786718 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 
14:26:40.786776 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786833 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786857 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786882 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " 
pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786911 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.786961 master-0 kubenswrapper[4409]: I1203 14:26:40.786942 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.786976 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787021 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787048 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787118 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787146 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787185 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787224 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787262 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.787311 master-0 kubenswrapper[4409]: I1203 14:26:40.787287 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.787584 master-0 kubenswrapper[4409]: I1203 14:26:40.787492 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:40.788302 master-0 kubenswrapper[4409]: I1203 14:26:40.787953 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:40.788302 master-0 kubenswrapper[4409]: I1203 14:26:40.788236 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-login\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: 
\"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.788509 master-0 kubenswrapper[4409]: I1203 14:26:40.788452 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.788625 master-0 kubenswrapper[4409]: I1203 14:26:40.788580 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.788760 master-0 kubenswrapper[4409]: I1203 14:26:40.788659 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-webhook-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:40.788934 master-0 kubenswrapper[4409]: I1203 14:26:40.788895 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.789762 master-0 kubenswrapper[4409]: I1203 14:26:40.789729 4409 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4669137a-fbc4-41e1-8eeb-5f06b9da2641-metrics-tls\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:40.789926 master-0 kubenswrapper[4409]: I1203 14:26:40.789876 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-service-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:40.789926 master-0 kubenswrapper[4409]: I1203 14:26:40.789899 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-config\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.790322 master-0 kubenswrapper[4409]: I1203 14:26:40.790286 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-serving-ca\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.791969 master-0 kubenswrapper[4409]: I1203 14:26:40.791917 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-etcd-client\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.792230 master-0 kubenswrapper[4409]: I1203 14:26:40.792189 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7663a25e-236d-4b1d-83ce-733ab146dee3-cert\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:40.792322 master-0 kubenswrapper[4409]: I1203 14:26:40.792264 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-proxy-ca-bundles\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:40.792413 master-0 kubenswrapper[4409]: I1203 14:26:40.792379 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.793201 master-0 kubenswrapper[4409]: I1203 14:26:40.792938 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-serving-cert\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:40.793639 master-0 kubenswrapper[4409]: I1203 14:26:40.793601 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8c6fa89f-268c-477b-9f04-238d2305cc89-proxy-tls\") pod \"machine-config-controller-74cddd4fb5-phk6r\" (UID: \"8c6fa89f-268c-477b-9f04-238d2305cc89\") " 
pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" Dec 03 14:26:40.793703 master-0 kubenswrapper[4409]: I1203 14:26:40.793641 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.794094 master-0 kubenswrapper[4409]: I1203 14:26:40.794060 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/06d774e5-314a-49df-bdca-8e780c9af25a-serving-cert\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:40.795239 master-0 kubenswrapper[4409]: I1203 14:26:40.794605 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-serving-cert\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:40.795239 master-0 kubenswrapper[4409]: I1203 14:26:40.794854 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-web-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.795732 master-0 kubenswrapper[4409]: I1203 14:26:40.795691 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-etcd-client\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.796059 master-0 kubenswrapper[4409]: I1203 14:26:40.795994 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c95705e3-17ef-40fe-89e8-22586a32621b-serving-cert\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:40.796059 master-0 kubenswrapper[4409]: I1203 14:26:40.796045 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:40.796223 master-0 kubenswrapper[4409]: I1203 14:26:40.796152 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:40.796534 master-0 kubenswrapper[4409]: I1203 14:26:40.796496 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " 
pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.796985 master-0 kubenswrapper[4409]: I1203 14:26:40.796945 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/69b752ed-691c-4574-a01e-428d4bf85b75-catalogserver-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:40.797262 master-0 kubenswrapper[4409]: I1203 14:26:40.797208 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/803897bb-580e-4f7a-9be2-583fc607d1f6-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:40.797346 master-0 kubenswrapper[4409]: I1203 14:26:40.797283 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b051ae27-7879-448d-b426-4dce76e29739-serving-cert\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:40.798592 master-0 kubenswrapper[4409]: I1203 14:26:40.798170 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-config\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:40.798980 master-0 kubenswrapper[4409]: I1203 14:26:40.798919 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-config\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.798980 master-0 kubenswrapper[4409]: I1203 14:26:40.798939 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b340553b-d483-4839-8328-518f27770832-samples-operator-tls\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:40.799517 master-0 kubenswrapper[4409]: I1203 14:26:40.799457 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04e9e2a5-cdc2-42af-ab2c-49525390be6d-apiservice-cert\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:40.799998 master-0 kubenswrapper[4409]: I1203 14:26:40.799956 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e9f484c1-1564-49c7-a43d-bd8b971cea20-machine-api-operator-tls\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:40.800094 master-0 kubenswrapper[4409]: I1203 14:26:40.800056 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.800316 master-0 
kubenswrapper[4409]: I1203 14:26:40.800285 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:40.800387 master-0 kubenswrapper[4409]: I1203 14:26:40.800309 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/98392f8e-0285-4bc3-95a9-d29033639ca3-metrics-tls\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:40.800387 master-0 kubenswrapper[4409]: I1203 14:26:40.800336 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-server-tls\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.800629 master-0 kubenswrapper[4409]: I1203 14:26:40.800595 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b1b3ab29-77cf-48ac-8881-846c46bb9048-nginx-conf\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:40.800693 master-0 kubenswrapper[4409]: I1203 14:26:40.800599 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-config\") pod \"console-operator-77df56447c-vsrxx\" (UID: 
\"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:40.800741 master-0 kubenswrapper[4409]: I1203 14:26:40.800701 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-ca-certs\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:40.800953 master-0 kubenswrapper[4409]: I1203 14:26:40.800925 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/36da3c2f-860c-4188-a7d7-5b615981a835-signing-cabundle\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:40.801159 master-0 kubenswrapper[4409]: I1203 14:26:40.801112 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-images\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:40.801248 master-0 kubenswrapper[4409]: I1203 14:26:40.801114 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:40.801368 master-0 kubenswrapper[4409]: I1203 14:26:40.801342 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-config\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:40.802131 master-0 kubenswrapper[4409]: I1203 14:26:40.801855 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-srv-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:40.802405 master-0 kubenswrapper[4409]: I1203 14:26:40.802297 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-srv-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:40.802405 master-0 kubenswrapper[4409]: I1203 14:26:40.802325 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.802765 master-0 kubenswrapper[4409]: I1203 14:26:40.802737 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.802926 master-0 kubenswrapper[4409]: I1203 14:26:40.802886 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"multus-admission-controller-84c998f64f-8stq7\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") " pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" Dec 03 14:26:40.803511 master-0 kubenswrapper[4409]: I1203 14:26:40.803465 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/36da3c2f-860c-4188-a7d7-5b615981a835-signing-key\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:40.803805 master-0 kubenswrapper[4409]: I1203 14:26:40.803784 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-client-ca-bundle\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.804771 master-0 kubenswrapper[4409]: I1203 14:26:40.804713 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-service-ca\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.804878 master-0 kubenswrapper[4409]: I1203 14:26:40.804774 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b95a5a6-db93-4a58-aaff-3619d130c8cb-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:40.805326 master-0 kubenswrapper[4409]: I1203 14:26:40.805303 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.806068 master-0 kubenswrapper[4409]: I1203 14:26:40.805890 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.806068 master-0 kubenswrapper[4409]: I1203 14:26:40.805999 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.806658 master-0 kubenswrapper[4409]: I1203 14:26:40.806158 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-image-import-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.806658 master-0 kubenswrapper[4409]: I1203 14:26:40.806471 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-telemetry-config\") pod 
\"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" Dec 03 14:26:40.806658 master-0 kubenswrapper[4409]: I1203 14:26:40.806539 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:40.807363 master-0 kubenswrapper[4409]: I1203 14:26:40.806966 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-federate-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.807363 master-0 kubenswrapper[4409]: I1203 14:26:40.807146 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b051ae27-7879-448d-b426-4dce76e29739-config\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:40.807719 master-0 kubenswrapper[4409]: I1203 14:26:40.807560 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c562495-1290-4792-b4b2-639faa594ae2-serving-cert\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:40.807820 master-0 kubenswrapper[4409]: I1203 14:26:40.807783 4409 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-profile-collector-cert\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:40.808129 master-0 kubenswrapper[4409]: I1203 14:26:40.807989 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/b1b3ab29-77cf-48ac-8881-846c46bb9048-networking-console-plugin-cert\") pod \"networking-console-plugin-7c696657b7-452tx\" (UID: \"b1b3ab29-77cf-48ac-8881-846c46bb9048\") " pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" Dec 03 14:26:40.808235 master-0 kubenswrapper[4409]: I1203 14:26:40.808145 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:40.808235 master-0 kubenswrapper[4409]: I1203 14:26:40.808207 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.809153 master-0 kubenswrapper[4409]: I1203 14:26:40.808319 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-config\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " 
pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:40.809666 master-0 kubenswrapper[4409]: I1203 14:26:40.809254 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/690d1f81-7b1f-4fd0-9b6e-154c9687c744-config\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:40.809666 master-0 kubenswrapper[4409]: I1203 14:26:40.809365 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e3675c78-1902-4b92-8a93-cf2dc316f060-cert\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:40.810066 master-0 kubenswrapper[4409]: I1203 14:26:40.809801 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a557d1-cdd9-47ff-afbd-a301e7f589a7-config\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:40.810558 master-0 kubenswrapper[4409]: I1203 14:26:40.810344 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-serving-cert\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:40.810558 master-0 kubenswrapper[4409]: I1203 14:26:40.810351 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-serving-cert\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.810558 master-0 kubenswrapper[4409]: I1203 14:26:40.810344 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:40.810558 master-0 kubenswrapper[4409]: I1203 14:26:40.810506 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.811880 master-0 kubenswrapper[4409]: I1203 14:26:40.811138 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52100521-67e9-40c9-887c-eda6560f06e0-serving-cert\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.811880 master-0 kubenswrapper[4409]: I1203 14:26:40.811577 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-client\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.812513 master-0 kubenswrapper[4409]: I1203 14:26:40.811957 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/829d285f-d532-45e4-b1ec-54adbc21b9f9-serving-certs-ca-bundle\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:40.814093 master-0 kubenswrapper[4409]: I1203 14:26:40.812803 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-audit-policies\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.814093 master-0 kubenswrapper[4409]: I1203 14:26:40.812825 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:40.814093 master-0 kubenswrapper[4409]: I1203 14:26:40.813273 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df2889c-99f7-402a-9d50-18ccf427179c-proxy-tls\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:40.814093 master-0 kubenswrapper[4409]: I1203 14:26:40.813736 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:40.814093 master-0 kubenswrapper[4409]: I1203 14:26:40.813851 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-audit\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814132 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4df2889c-99f7-402a-9d50-18ccf427179c-images\") pod \"machine-config-operator-664c9d94c9-9vfr4\" (UID: \"4df2889c-99f7-402a-9d50-18ccf427179c\") " pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814156 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814213 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-service-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814338 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/e9f484c1-1564-49c7-a43d-bd8b971cea20-images\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814348 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e89bc996-818b-46b9-ad39-a12457acd4bb-serving-cert\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:40.814531 master-0 kubenswrapper[4409]: I1203 14:26:40.814400 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-serving-cert\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.814784 master-0 kubenswrapper[4409]: I1203 14:26:40.814617 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-etcd-serving-ca\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.814784 master-0 kubenswrapper[4409]: I1203 14:26:40.814779 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bcc78129-4a81-410e-9a42-b12043b5a75a-trusted-ca\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:40.814897 master-0 kubenswrapper[4409]: I1203 14:26:40.814852 4409 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06d774e5-314a-49df-bdca-8e780c9af25a-config\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:40.814956 master-0 kubenswrapper[4409]: I1203 14:26:40.814936 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a557d1-cdd9-47ff-afbd-a301e7f589a7-serving-cert\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:40.815023 master-0 kubenswrapper[4409]: I1203 14:26:40.814957 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a969ddd4-e20d-4dd2-84f4-a140bac65df0-encryption-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.815109 master-0 kubenswrapper[4409]: I1203 14:26:40.815089 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a8dc6511-7339-4269-9d43-14ce53bb4e7f-trusted-ca\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:40.815403 master-0 kubenswrapper[4409]: I1203 14:26:40.815272 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52100521-67e9-40c9-887c-eda6560f06e0-etcd-ca\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:40.815403 master-0 kubenswrapper[4409]: I1203 
14:26:40.815358 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cco-trusted-ca\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:40.815403 master-0 kubenswrapper[4409]: I1203 14:26:40.815384 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-router-certs\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.815538 master-0 kubenswrapper[4409]: I1203 14:26:40.815497 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0535e784-8e28-4090-aa2e-df937910767c-trusted-ca-bundle\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:40.815538 master-0 kubenswrapper[4409]: I1203 14:26:40.815504 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44af6af5-cecb-4dc4-b793-e8e350f8a47d-trusted-ca\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" Dec 03 14:26:40.815632 master-0 kubenswrapper[4409]: I1203 14:26:40.815524 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-kube-rbac-proxy\") pod 
\"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:40.815692 master-0 kubenswrapper[4409]: I1203 14:26:40.815665 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7663a25e-236d-4b1d-83ce-733ab146dee3-auth-proxy-config\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:40.815769 master-0 kubenswrapper[4409]: I1203 14:26:40.815734 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04e9e2a5-cdc2-42af-ab2c-49525390be6d-trusted-ca\") pod \"cluster-node-tuning-operator-bbd9b9dff-rrfsm\" (UID: \"04e9e2a5-cdc2-42af-ab2c-49525390be6d\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:40.815930 master-0 kubenswrapper[4409]: I1203 14:26:40.815907 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-config\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:40.816073 master-0 kubenswrapper[4409]: I1203 14:26:40.815977 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa169e84-880b-4e6d-aeee-7ebfa1f613d2-prometheus-operator-tls\") pod \"prometheus-operator-565bdcb8-477pk\" (UID: \"aa169e84-880b-4e6d-aeee-7ebfa1f613d2\") " pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" Dec 03 14:26:40.816073 master-0 kubenswrapper[4409]: I1203 14:26:40.815970 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/adbcce01-7282-4a75-843a-9623060346f0-config\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:40.816184 master-0 kubenswrapper[4409]: I1203 14:26:40.816152 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-error\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.816231 master-0 kubenswrapper[4409]: I1203 14:26:40.816156 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d838c1a-22e2-4096-9739-7841ef7d06ba-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0" Dec 03 14:26:40.816231 master-0 kubenswrapper[4409]: I1203 14:26:40.816165 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c562495-1290-4792-b4b2-639faa594ae2-config\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:40.816315 master-0 kubenswrapper[4409]: I1203 14:26:40.816243 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " 
pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.816422 master-0 kubenswrapper[4409]: I1203 14:26:40.816388 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-config\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:40.816489 master-0 kubenswrapper[4409]: I1203 14:26:40.816473 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/faa79e15-1875-4865-b5e0-aecd4c447bad-package-server-manager-serving-cert\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:40.816541 master-0 kubenswrapper[4409]: I1203 14:26:40.816484 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/24dfafc9-86a9-450e-ac62-a871138106c0-encryption-config\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:40.816682 master-0 kubenswrapper[4409]: I1203 14:26:40.816658 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.816747 master-0 kubenswrapper[4409]: I1203 14:26:40.816704 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/b02244d0-f4ef-4702-950d-9e3fb5ced128-monitoring-plugin-cert\") pod \"monitoring-plugin-547cc9cc49-kqs4k\" (UID: \"b02244d0-f4ef-4702-950d-9e3fb5ced128\") " pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:40.816837 master-0 kubenswrapper[4409]: I1203 14:26:40.816793 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:40.817045 master-0 kubenswrapper[4409]: I1203 14:26:40.816963 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:40.817119 master-0 kubenswrapper[4409]: I1203 14:26:40.817070 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-system-session\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:40.817372 master-0 kubenswrapper[4409]: I1203 14:26:40.817336 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/09b7b0c6-47cc-4860-8c78-9583bb5b0a6e-secret-metrics-client-certs\") pod \"metrics-server-555496955b-vpcbs\" (UID: \"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e\") " pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:26:40.817462 
master-0 kubenswrapper[4409]: I1203 14:26:40.817343 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5d838c1a-22e2-4096-9739-7841ef7d06ba-tls-assets\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.817647 master-0 kubenswrapper[4409]: I1203 14:26:40.817592 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-trusted-ca\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:40.818040 master-0 kubenswrapper[4409]: I1203 14:26:40.817988 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:40.818523 master-0 kubenswrapper[4409]: I1203 14:26:40.818475 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adbcce01-7282-4a75-843a-9623060346f0-serving-cert\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:40.818920 master-0 kubenswrapper[4409]: I1203 14:26:40.818880 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/918ff36b-662f-46ae-b71a-301df7e67735-serving-cert\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:40.820303 master-0 kubenswrapper[4409]: I1203 14:26:40.820255 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/44af6af5-cecb-4dc4-b793-e8e350f8a47d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-65dc4bcb88-96zcz\" (UID: \"44af6af5-cecb-4dc4-b793-e8e350f8a47d\") " pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:40.820303 master-0 kubenswrapper[4409]: I1203 14:26:40.820278 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bcc78129-4a81-410e-9a42-b12043b5a75a-metrics-tls\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5"
Dec 03 14:26:40.820581 master-0 kubenswrapper[4409]: I1203 14:26:40.820533 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0535e784-8e28-4090-aa2e-df937910767c-serving-cert\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:40.821136 master-0 kubenswrapper[4409]: I1203 14:26:40.821107 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8a12409a-0be3-4023-9df3-a0f091aac8dc-secret-grpc-tls\") pod \"thanos-querier-cc996c4bd-j4hzr\" (UID: \"8a12409a-0be3-4023-9df3-a0f091aac8dc\") " pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:40.821366 master-0 kubenswrapper[4409]: I1203 14:26:40.821337 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:40.821478 master-0 kubenswrapper[4409]: I1203 14:26:40.821447 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/56649bd4-ac30-4a70-8024-772294fede88-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.821757 master-0 kubenswrapper[4409]: I1203 14:26:40.821721 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b3eef3ef-f954-4e47-92b4-0155bc27332d-profile-collector-cert\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:40.821814 master-0 kubenswrapper[4409]: I1203 14:26:40.821763 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8dc6511-7339-4269-9d43-14ce53bb4e7f-serving-cert\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:40.821946 master-0 kubenswrapper[4409]: I1203 14:26:40.821900 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.822332 master-0 kubenswrapper[4409]: I1203 14:26:40.822280 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/eefee934-ac6b-44e3-a6be-1ae62362ab4f-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:40.822332 master-0 kubenswrapper[4409]: I1203 14:26:40.822321 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-apiservice-cert\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr"
Dec 03 14:26:40.822707 master-0 kubenswrapper[4409]: I1203 14:26:40.822673 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9e0a2889-39a5-471e-bd46-958e2f8eacaa-tls-certificates\") pod \"prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6\" (UID: \"9e0a2889-39a5-471e-bd46-958e2f8eacaa\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:40.822773 master-0 kubenswrapper[4409]: I1203 14:26:40.822755 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.823111 master-0 kubenswrapper[4409]: I1203 14:26:40.823053 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-telemeter-client-tls\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:40.828097 master-0 kubenswrapper[4409]: I1203 14:26:40.828060 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-555496955b-vpcbs"
Dec 03 14:26:40.839162 master-0 kubenswrapper[4409]: I1203 14:26:40.839093 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz"
Dec 03 14:26:40.870912 master-0 kubenswrapper[4409]: I1203 14:26:40.870839 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4"
Dec 03 14:26:40.883641 master-0 kubenswrapper[4409]: I1203 14:26:40.883561 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:26:40.889638 master-0 kubenswrapper[4409]: I1203 14:26:40.889526 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:40.889822 master-0 kubenswrapper[4409]: I1203 14:26:40.889763 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.889822 master-0 kubenswrapper[4409]: I1203 14:26:40.889814 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.889900 master-0 kubenswrapper[4409]: I1203 14:26:40.889836 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:40.889900 master-0 kubenswrapper[4409]: I1203 14:26:40.889858 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.889900 master-0 kubenswrapper[4409]: I1203 14:26:40.889875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.889900 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.889919 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.889936 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.889984 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.890014 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:40.890047 master-0 kubenswrapper[4409]: I1203 14:26:40.890037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890057 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890078 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890097 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890132 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890150 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890167 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890195 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890218 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:40.890291 master-0 kubenswrapper[4409]: I1203 14:26:40.890284 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:40.891753 master-0 kubenswrapper[4409]: I1203 14:26:40.891649 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e89bc996-818b-46b9-ad39-a12457acd4bb-client-ca\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:40.893750 master-0 kubenswrapper[4409]: I1203 14:26:40.893671 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-config-volume\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.893750 master-0 kubenswrapper[4409]: I1203 14:26:40.893707 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/918ff36b-662f-46ae-b71a-301df7e67735-config\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:40.894109 master-0 kubenswrapper[4409]: I1203 14:26:40.894064 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6dpf\" (UniqueName: \"kubernetes.io/projected/f8c6a484-5f0e-4abc-bc48-934ad0ffde0a-kube-api-access-p6dpf\") pod \"network-check-source-6964bb78b7-g4lv2\" (UID: \"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a\") " pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:40.894179 master-0 kubenswrapper[4409]: I1203 14:26:40.894119 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56649bd4-ac30-4a70-8024-772294fede88-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.894276 master-0 kubenswrapper[4409]: I1203 14:26:40.894244 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv"
Dec 03 14:26:40.894488 master-0 kubenswrapper[4409]: I1203 14:26:40.894333 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a969ddd4-e20d-4dd2-84f4-a140bac65df0-trusted-ca-bundle\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg"
Dec 03 14:26:40.894622 master-0 kubenswrapper[4409]: I1203 14:26:40.894478 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95705e3-17ef-40fe-89e8-22586a32621b-trusted-ca-bundle\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss"
Dec 03 14:26:40.894843 master-0 kubenswrapper[4409]: I1203 14:26:40.894753 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:40.894843 master-0 kubenswrapper[4409]: I1203 14:26:40.894781 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-web-config\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.896923 master-0 kubenswrapper[4409]: I1203 14:26:40.895317 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4669137a-fbc4-41e1-8eeb-5f06b9da2641-config-volume\") pod \"dns-default-5m4f8\" (UID: \"4669137a-fbc4-41e1-8eeb-5f06b9da2641\") " pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:40.896923 master-0 kubenswrapper[4409]: I1203 14:26:40.896662 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/690d1f81-7b1f-4fd0-9b6e-154c9687c744-cert\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j"
Dec 03 14:26:40.896923 master-0 kubenswrapper[4409]: I1203 14:26:40.896806 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/55351b08-d46d-4327-aa5e-ae17fdffdfb5-marketplace-operator-metrics\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:40.896923 master-0 kubenswrapper[4409]: I1203 14:26:40.896891 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8eee1d96-2f58-41a6-ae51-c158b29fc813-kube-state-metrics-tls\") pod \"kube-state-metrics-7dcc7f9bd6-68wml\" (UID: \"8eee1d96-2f58-41a6-ae51-c158b29fc813\") " pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml"
Dec 03 14:26:40.897704 master-0 kubenswrapper[4409]: I1203 14:26:40.897649 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/56649bd4-ac30-4a70-8024-772294fede88-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"56649bd4-ac30-4a70-8024-772294fede88\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 03 14:26:40.897759 master-0 kubenswrapper[4409]: I1203 14:26:40.897705 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/829d285f-d532-45e4-b1ec-54adbc21b9f9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-764cbf5554-kftwv\" (UID: \"829d285f-d532-45e4-b1ec-54adbc21b9f9\") " pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv"
Dec 03 14:26:40.898038 master-0 kubenswrapper[4409]: I1203 14:26:40.897959 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/74e39dce-29d5-4b2a-ab19-386b6cdae94d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-57cbc648f8-q4cgg\" (UID: \"74e39dce-29d5-4b2a-ab19-386b6cdae94d\") " pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg"
Dec 03 14:26:40.898596 master-0 kubenswrapper[4409]: I1203 14:26:40.898475 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/5d838c1a-22e2-4096-9739-7841ef7d06ba-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"5d838c1a-22e2-4096-9739-7841ef7d06ba\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:40.898803 master-0 kubenswrapper[4409]: I1203 14:26:40.898676 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-ca-certs\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw"
Dec 03 14:26:40.899206 master-0 kubenswrapper[4409]: I1203 14:26:40.899151 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-69cc794c58-mfjk2\" (UID: \"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d\") " pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:40.901025 master-0 kubenswrapper[4409]: I1203 14:26:40.900961 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3c1ebb9-f052-410b-a999-45e9b75b0e58-metrics-certs\") pod \"network-metrics-daemon-ch7xd\" (UID: \"b3c1ebb9-f052-410b-a999-45e9b75b0e58\") " pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:40.923393 master-0 kubenswrapper[4409]: I1203 14:26:40.916795 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k"
Dec 03 14:26:40.923393 master-0 kubenswrapper[4409]: I1203 14:26:40.919074 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2"
Dec 03 14:26:40.931150 master-0 kubenswrapper[4409]: I1203 14:26:40.931084 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6"
Dec 03 14:26:40.931150 master-0 kubenswrapper[4409]: I1203 14:26:40.931113 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5m4f8"
Dec 03 14:26:40.937954 master-0 kubenswrapper[4409]: I1203 14:26:40.937862 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx"
Dec 03 14:26:40.946074 master-0 kubenswrapper[4409]: I1203 14:26:40.944317 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2"
Dec 03 14:26:40.996860 master-0 kubenswrapper[4409]: I1203 14:26:40.996560 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ch7xd"
Dec 03 14:26:41.023867 master-0 kubenswrapper[4409]: I1203 14:26:41.023436 4409 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Dec 03 14:26:41.044263 master-0 kubenswrapper[4409]: I1203 14:26:41.044209 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk"
Dec 03 14:26:41.044754 master-0 kubenswrapper[4409]: I1203 14:26:41.044710 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r"
Dec 03 14:26:41.067829 master-0 kubenswrapper[4409]: W1203 14:26:41.067764 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09b7b0c6_47cc_4860_8c78_9583bb5b0a6e.slice/crio-27b0cba6c64d034a98d14016fdedebef42d42b75a6b40179ac4fcced1720044c WatchSource:0}: Error finding container 27b0cba6c64d034a98d14016fdedebef42d42b75a6b40179ac4fcced1720044c: Status 404 returned error can't find the container with id 27b0cba6c64d034a98d14016fdedebef42d42b75a6b40179ac4fcced1720044c
Dec 03 14:26:41.086793 master-0 kubenswrapper[4409]: I1203 14:26:41.086741 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr"
Dec 03 14:26:41.087239 master-0 kubenswrapper[4409]: I1203 14:26:41.087219 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 03 14:26:41.093855 master-0 kubenswrapper[4409]: I1203 14:26:41.093798 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4"
Dec 03 14:26:41.093855 master-0 kubenswrapper[4409]: I1203 14:26:41.093853 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.093889 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.093915 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.093943 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.093975 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nj\" (UniqueName: \"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.094020 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.094057 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8"
Dec 03 14:26:41.094103 master-0 kubenswrapper[4409]: I1203 14:26:41.094085 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094117 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094166 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094192 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094218 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094258 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094289 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094314 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn"
Dec 03 14:26:41.094346 master-0 kubenswrapper[4409]: I1203 14:26:41.094340 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh"
Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094367 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm"
Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094394 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p"
Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094424 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb"
Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094455 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9"
Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094484 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrngd\"
(UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094512 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbzpz\" (UniqueName: \"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094543 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094576 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094602 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 
14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094633 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094656 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094722 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:41.094990 master-0 kubenswrapper[4409]: I1203 14:26:41.094809 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: 
\"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:41.102926 master-0 kubenswrapper[4409]: I1203 14:26:41.102872 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28n2f\" (UniqueName: \"kubernetes.io/projected/e3675c78-1902-4b92-8a93-cf2dc316f060-kube-api-access-28n2f\") pod \"ingress-canary-vkpv4\" (UID: \"e3675c78-1902-4b92-8a93-cf2dc316f060\") " pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:41.103232 master-0 kubenswrapper[4409]: I1203 14:26:41.103103 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nch\" (UniqueName: \"kubernetes.io/projected/6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d-kube-api-access-c5nch\") pod \"downloads-6f5db8559b-96ljh\" (UID: \"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d\") " pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:41.103232 master-0 kubenswrapper[4409]: I1203 14:26:41.103102 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06d774e5-314a-49df-bdca-8e780c9af25a-kube-api-access\") pod \"kube-apiserver-operator-5b557b5f57-s5s96\" (UID: \"06d774e5-314a-49df-bdca-8e780c9af25a\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:41.103344 master-0 kubenswrapper[4409]: I1203 14:26:41.103259 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdn2\" (UniqueName: \"kubernetes.io/projected/b3eef3ef-f954-4e47-92b4-0155bc27332d-kube-api-access-lfdn2\") pod \"olm-operator-76bd5d69c7-fjrrg\" (UID: \"b3eef3ef-f954-4e47-92b4-0155bc27332d\") " pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:41.103809 master-0 kubenswrapper[4409]: I1203 
14:26:41.103507 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/c95705e3-17ef-40fe-89e8-22586a32621b-kube-api-access-zhc87\") pod \"insights-operator-59d99f9b7b-74sss\" (UID: \"c95705e3-17ef-40fe-89e8-22586a32621b\") " pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:41.104144 master-0 kubenswrapper[4409]: I1203 14:26:41.104103 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgq6z\" (UniqueName: \"kubernetes.io/projected/52100521-67e9-40c9-887c-eda6560f06e0-kube-api-access-cgq6z\") pod \"etcd-operator-7978bf889c-n64v4\" (UID: \"52100521-67e9-40c9-887c-eda6560f06e0\") " pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:41.104221 master-0 kubenswrapper[4409]: I1203 14:26:41.104180 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6pb\" (UniqueName: \"kubernetes.io/projected/918ff36b-662f-46ae-b71a-301df7e67735-kube-api-access-rb6pb\") pod \"kube-storage-version-migrator-operator-67c4cff67d-q2lxz\" (UID: \"918ff36b-662f-46ae-b71a-301df7e67735\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.104939 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czfkv\" (UniqueName: \"kubernetes.io/projected/0535e784-8e28-4090-aa2e-df937910767c-kube-api-access-czfkv\") pod \"authentication-operator-7479ffdf48-hpdzl\" (UID: \"0535e784-8e28-4090-aa2e-df937910767c\") " pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.105115 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbzpz\" (UniqueName: 
\"kubernetes.io/projected/a969ddd4-e20d-4dd2-84f4-a140bac65df0-kube-api-access-cbzpz\") pod \"apiserver-6985f84b49-v9vlg\" (UID: \"a969ddd4-e20d-4dd2-84f4-a140bac65df0\") " pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.105258 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt87\" (UniqueName: \"kubernetes.io/projected/55351b08-d46d-4327-aa5e-ae17fdffdfb5-kube-api-access-nxt87\") pod \"marketplace-operator-7d67745bb7-dwcxb\" (UID: \"55351b08-d46d-4327-aa5e-ae17fdffdfb5\") " pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.105263 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqkdr\" (UniqueName: \"kubernetes.io/projected/63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4-kube-api-access-wqkdr\") pod \"csi-snapshot-controller-86897dd478-qqwh7\" (UID: \"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.105292 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n798x\" (UniqueName: \"kubernetes.io/projected/e89bc996-818b-46b9-ad39-a12457acd4bb-kube-api-access-n798x\") pod \"controller-manager-7d7ddcf759-pvkrm\" (UID: \"e89bc996-818b-46b9-ad39-a12457acd4bb\") " pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:41.105345 master-0 kubenswrapper[4409]: I1203 14:26:41.105269 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlgx\" (UniqueName: \"kubernetes.io/projected/36da3c2f-860c-4188-a7d7-5b615981a835-kube-api-access-jzlgx\") pod \"service-ca-6b8bb995f7-b68p8\" (UID: \"36da3c2f-860c-4188-a7d7-5b615981a835\") " pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:41.105650 
master-0 kubenswrapper[4409]: I1203 14:26:41.105379 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" Dec 03 14:26:41.109988 master-0 kubenswrapper[4409]: I1203 14:26:41.109533 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vkpv4" Dec 03 14:26:41.109988 master-0 kubenswrapper[4409]: I1203 14:26:41.109762 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7d88\" (UniqueName: \"kubernetes.io/projected/b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab-kube-api-access-v7d88\") pod \"oauth-openshift-747bdb58b5-mn76f\" (UID: \"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab\") " pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:41.109988 master-0 kubenswrapper[4409]: I1203 14:26:41.109781 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q659\" (UniqueName: \"kubernetes.io/projected/faa79e15-1875-4865-b5e0-aecd4c447bad-kube-api-access-7q659\") pod \"package-server-manager-75b4d49d4c-h599p\" (UID: \"faa79e15-1875-4865-b5e0-aecd4c447bad\") " pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:41.109988 master-0 kubenswrapper[4409]: I1203 14:26:41.109789 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8h8\" (UniqueName: \"kubernetes.io/projected/803897bb-580e-4f7a-9be2-583fc607d1f6-kube-api-access-fw8h8\") pod \"cluster-olm-operator-589f5cdc9d-5h2kn\" (UID: \"803897bb-580e-4f7a-9be2-583fc607d1f6\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:41.110703 master-0 kubenswrapper[4409]: I1203 14:26:41.110541 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nj\" (UniqueName: 
\"kubernetes.io/projected/6b95a5a6-db93-4a58-aaff-3619d130c8cb-kube-api-access-nc9nj\") pod \"cluster-storage-operator-f84784664-ntb9w\" (UID: \"6b95a5a6-db93-4a58-aaff-3619d130c8cb\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:41.110703 master-0 kubenswrapper[4409]: I1203 14:26:41.110604 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mrw\" (UniqueName: \"kubernetes.io/projected/a8dc6511-7339-4269-9d43-14ce53bb4e7f-kube-api-access-p5mrw\") pod \"console-operator-77df56447c-vsrxx\" (UID: \"a8dc6511-7339-4269-9d43-14ce53bb4e7f\") " pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:41.110703 master-0 kubenswrapper[4409]: I1203 14:26:41.110603 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrngd\" (UniqueName: \"kubernetes.io/projected/f1f2d0e1-eaaf-4037-a976-5fc2a942c50c-kube-api-access-nrngd\") pod \"service-ca-operator-56f5898f45-fhnc5\" (UID: \"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c\") " pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:41.111737 master-0 kubenswrapper[4409]: I1203 14:26:41.111445 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8knq\" (UniqueName: \"kubernetes.io/projected/69b752ed-691c-4574-a01e-428d4bf85b75-kube-api-access-t8knq\") pod \"catalogd-controller-manager-754cfd84-qf898\" (UID: \"69b752ed-691c-4574-a01e-428d4bf85b75\") " pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:41.111789 master-0 kubenswrapper[4409]: I1203 14:26:41.111729 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfs27\" (UniqueName: \"kubernetes.io/projected/1c562495-1290-4792-b4b2-639faa594ae2-kube-api-access-tfs27\") pod \"openshift-apiserver-operator-667484ff5-n7qz8\" (UID: \"1c562495-1290-4792-b4b2-639faa594ae2\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:41.112420 master-0 kubenswrapper[4409]: I1203 14:26:41.112056 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkbcq\" (UniqueName: \"kubernetes.io/projected/adbcce01-7282-4a75-843a-9623060346f0-kube-api-access-jkbcq\") pod \"openshift-controller-manager-operator-7c4697b5f5-9f69p\" (UID: \"adbcce01-7282-4a75-843a-9623060346f0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:41.112420 master-0 kubenswrapper[4409]: I1203 14:26:41.112205 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fns8\" (UniqueName: \"kubernetes.io/projected/c180b512-bf0c-4ddc-a5cf-f04acc830a61-kube-api-access-2fns8\") pod \"csi-snapshot-controller-operator-7b795784b8-44frm\" (UID: \"c180b512-bf0c-4ddc-a5cf-f04acc830a61\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.112691 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ss6\" (UniqueName: \"kubernetes.io/projected/d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff-kube-api-access-p7ss6\") pod \"packageserver-7c64dd9d8b-49skr\" (UID: \"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff\") " pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.113441 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aa67ace-d03a-4d06-9fb5-24777b65f2cc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5f574c6c79-86bh9\" (UID: \"5aa67ace-d03a-4d06-9fb5-24777b65f2cc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: 
I1203 14:26:41.113646 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbsl\" (UniqueName: \"kubernetes.io/projected/e9f484c1-1564-49c7-a43d-bd8b971cea20-kube-api-access-rjbsl\") pod \"machine-api-operator-7486ff55f-wcnxg\" (UID: \"e9f484c1-1564-49c7-a43d-bd8b971cea20\") " pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.114464 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwtx\" (UniqueName: \"kubernetes.io/projected/614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1-kube-api-access-ncwtx\") pod \"redhat-marketplace-ddwmn\" (UID: \"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1\") " pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.114973 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4f8\" (UniqueName: \"kubernetes.io/projected/0b4c4f1f-d61e-483e-8c0b-6e2774437e4d-kube-api-access-pj4f8\") pod \"openshift-config-operator-68c95b6cf5-fmdmz\" (UID: \"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d\") " pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.115468 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"console-6c9c84854-xf7nv\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:41.115810 master-0 kubenswrapper[4409]: I1203 14:26:41.115624 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" Dec 03 14:26:41.116477 master-0 kubenswrapper[4409]: I1203 14:26:41.116440 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn5h6\" (UniqueName: \"kubernetes.io/projected/eefee934-ac6b-44e3-a6be-1ae62362ab4f-kube-api-access-jn5h6\") pod \"cloud-credential-operator-7c4dc67499-tjwg8\" (UID: \"eefee934-ac6b-44e3-a6be-1ae62362ab4f\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:41.118209 master-0 kubenswrapper[4409]: I1203 14:26:41.117383 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" Dec 03 14:26:41.123134 master-0 kubenswrapper[4409]: I1203 14:26:41.123035 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b051ae27-7879-448d-b426-4dce76e29739-kube-api-access\") pod \"kube-controller-manager-operator-b5dddf8f5-kwb74\" (UID: \"b051ae27-7879-448d-b426-4dce76e29739\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:41.125665 master-0 kubenswrapper[4409]: I1203 14:26:41.125623 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" Dec 03 14:26:41.135075 master-0 kubenswrapper[4409]: I1203 14:26:41.133691 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:41.139791 master-0 kubenswrapper[4409]: I1203 14:26:41.135894 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" Dec 03 14:26:41.145631 master-0 kubenswrapper[4409]: I1203 14:26:41.141663 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:41.145631 master-0 kubenswrapper[4409]: I1203 14:26:41.143829 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" Dec 03 14:26:41.168238 master-0 kubenswrapper[4409]: I1203 14:26:41.146532 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:41.168238 master-0 kubenswrapper[4409]: I1203 14:26:41.151775 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" Dec 03 14:26:41.168238 master-0 kubenswrapper[4409]: I1203 14:26:41.158773 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:41.168238 master-0 kubenswrapper[4409]: I1203 14:26:41.167043 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" Dec 03 14:26:41.184984 master-0 kubenswrapper[4409]: I1203 14:26:41.177488 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" Dec 03 14:26:41.184984 master-0 kubenswrapper[4409]: I1203 14:26:41.184441 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" Dec 03 14:26:41.196186 master-0 kubenswrapper[4409]: I1203 14:26:41.195754 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" Dec 03 14:26:41.205279 master-0 kubenswrapper[4409]: I1203 14:26:41.202333 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:41.205279 master-0 kubenswrapper[4409]: I1203 14:26:41.204031 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" Dec 03 14:26:41.271917 master-0 kubenswrapper[4409]: I1203 14:26:41.271763 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" Dec 03 14:26:41.272497 master-0 kubenswrapper[4409]: I1203 14:26:41.272446 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" Dec 03 14:26:41.297362 master-0 kubenswrapper[4409]: I1203 14:26:41.294289 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:41.297362 master-0 kubenswrapper[4409]: I1203 14:26:41.295627 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" Dec 03 14:26:41.314725 master-0 kubenswrapper[4409]: I1203 14:26:41.311323 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" Dec 03 14:26:41.328184 master-0 kubenswrapper[4409]: I1203 14:26:41.326589 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:41.328184 master-0 kubenswrapper[4409]: I1203 14:26:41.327552 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:41.361812 master-0 kubenswrapper[4409]: I1203 14:26:41.358507 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:41.361812 master-0 kubenswrapper[4409]: I1203 14:26:41.359440 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:41.361812 master-0 kubenswrapper[4409]: I1203 14:26:41.359733 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:41.361812 master-0 kubenswrapper[4409]: I1203 14:26:41.359971 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" Dec 03 14:26:41.361812 master-0 kubenswrapper[4409]: I1203 14:26:41.360301 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" Dec 03 14:26:41.368413 master-0 kubenswrapper[4409]: I1203 14:26:41.367707 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:26:41.369300 master-0 kubenswrapper[4409]: I1203 14:26:41.369260 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" Dec 03 14:26:41.372948 master-0 kubenswrapper[4409]: I1203 14:26:41.372912 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" Dec 03 14:26:41.390712 master-0 kubenswrapper[4409]: I1203 14:26:41.390664 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:41.393530 master-0 kubenswrapper[4409]: I1203 14:26:41.393402 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" Dec 03 14:26:41.395560 master-0 kubenswrapper[4409]: W1203 14:26:41.395453 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38888547_ed48_4f96_810d_bcd04e49bd6b.slice/crio-8bad6fc68d181f8eea3007b7d8af3b8a5fd80fc3a0d559583523f7c0f1d226b8 WatchSource:0}: Error finding container 8bad6fc68d181f8eea3007b7d8af3b8a5fd80fc3a0d559583523f7c0f1d226b8: Status 404 returned error can't find the container with id 8bad6fc68d181f8eea3007b7d8af3b8a5fd80fc3a0d559583523f7c0f1d226b8 Dec 03 14:26:41.400312 master-0 kubenswrapper[4409]: I1203 14:26:41.400257 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:41.405952 master-0 kubenswrapper[4409]: W1203 14:26:41.405580 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44af6af5_cecb_4dc4_b793_e8e350f8a47d.slice/crio-b7ef9cd8f87f75f737158d8ad9b2dfd45e59749ed9b5bd6a3867dc18221e3321 WatchSource:0}: Error finding container b7ef9cd8f87f75f737158d8ad9b2dfd45e59749ed9b5bd6a3867dc18221e3321: Status 404 returned error can't find the container with id b7ef9cd8f87f75f737158d8ad9b2dfd45e59749ed9b5bd6a3867dc18221e3321 Dec 03 14:26:41.488892 master-0 kubenswrapper[4409]: I1203 14:26:41.486607 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:41.488892 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:41.488892 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:41.488892 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:41.488892 master-0 kubenswrapper[4409]: I1203 14:26:41.486674 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:41.497209 master-0 kubenswrapper[4409]: I1203 14:26:41.495491 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"8077ed67e0dd464d9cf38f119f42059101ccb3bf98bd9a6a809b1f7acedbda4b"} Dec 03 14:26:41.497209 master-0 kubenswrapper[4409]: I1203 14:26:41.495541 4409 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" event={"ID":"09b7b0c6-47cc-4860-8c78-9583bb5b0a6e","Type":"ContainerStarted","Data":"27b0cba6c64d034a98d14016fdedebef42d42b75a6b40179ac4fcced1720044c"} Dec 03 14:26:41.506390 master-0 kubenswrapper[4409]: I1203 14:26:41.505907 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"7141322276e82e40490c5b402d6b691697894d554fe95fb1a5784fb0b728b41b"} Dec 03 14:26:41.511495 master-0 kubenswrapper[4409]: I1203 14:26:41.511442 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"8bad6fc68d181f8eea3007b7d8af3b8a5fd80fc3a0d559583523f7c0f1d226b8"} Dec 03 14:26:41.517307 master-0 kubenswrapper[4409]: I1203 14:26:41.517175 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"b7ef9cd8f87f75f737158d8ad9b2dfd45e59749ed9b5bd6a3867dc18221e3321"} Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815513 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815564 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" 
(UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815619 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815642 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815667 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxkd\" (UniqueName: \"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815687 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 
14:26:41.815705 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod \"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815727 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815743 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815766 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815786 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" 
(UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815832 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815854 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 
14:26:41.815901 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod \"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:41.816139 master-0 kubenswrapper[4409]: I1203 14:26:41.815921 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:41.824875 master-0 kubenswrapper[4409]: I1203 14:26:41.824805 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwck4\" (UniqueName: \"kubernetes.io/projected/82bd0ae5-b35d-47c8-b693-b27a9a56476d-kube-api-access-bwck4\") pod \"operator-controller-controller-manager-5f78c89466-bshxw\" (UID: \"82bd0ae5-b35d-47c8-b693-b27a9a56476d\") " pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:41.826878 master-0 kubenswrapper[4409]: I1203 14:26:41.826274 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v429m\" (UniqueName: \"kubernetes.io/projected/6d38d102-4efe-4ed3-ae23-b1e295cdaccd-kube-api-access-v429m\") pod \"network-check-target-pcchm\" (UID: \"6d38d102-4efe-4ed3-ae23-b1e295cdaccd\") " pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:41.826878 master-0 kubenswrapper[4409]: I1203 14:26:41.826764 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcqxx\" (UniqueName: \"kubernetes.io/projected/bff18a80-0b0f-40ab-862e-e8b1ab32040a-kube-api-access-zcqxx\") pod 
\"community-operators-7fwtv\" (UID: \"bff18a80-0b0f-40ab-862e-e8b1ab32040a\") " pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:41.827100 master-0 kubenswrapper[4409]: I1203 14:26:41.826982 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:41.827181 master-0 kubenswrapper[4409]: I1203 14:26:41.826990 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92p99\" (UniqueName: \"kubernetes.io/projected/b340553b-d483-4839-8328-518f27770832-kube-api-access-92p99\") pod \"cluster-samples-operator-6d64b47964-jjd7h\" (UID: \"b340553b-d483-4839-8328-518f27770832\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:41.827229 master-0 kubenswrapper[4409]: I1203 14:26:41.827160 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/bcc78129-4a81-410e-9a42-b12043b5a75a-kube-api-access-x22gr\") pod \"ingress-operator-85dbd94574-8jfp5\" (UID: \"bcc78129-4a81-410e-9a42-b12043b5a75a\") " pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:41.828762 master-0 kubenswrapper[4409]: I1203 14:26:41.828717 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wh8g\" (UniqueName: \"kubernetes.io/projected/690d1f81-7b1f-4fd0-9b6e-154c9687c744-kube-api-access-8wh8g\") pod \"cluster-baremetal-operator-5fdc576499-j2n8j\" (UID: \"690d1f81-7b1f-4fd0-9b6e-154c9687c744\") " pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:41.829457 master-0 kubenswrapper[4409]: I1203 14:26:41.829413 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m789m\" (UniqueName: \"kubernetes.io/projected/24dfafc9-86a9-450e-ac62-a871138106c0-kube-api-access-m789m\") pod 
\"apiserver-57fd58bc7b-kktql\" (UID: \"24dfafc9-86a9-450e-ac62-a871138106c0\") " pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:41.831919 master-0 kubenswrapper[4409]: I1203 14:26:41.831866 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwv7s\" (UniqueName: \"kubernetes.io/projected/6f723d97-5c65-4ae7-9085-26db8b4f2f52-kube-api-access-wwv7s\") pod \"migrator-5bcf58cf9c-dvklg\" (UID: \"6f723d97-5c65-4ae7-9085-26db8b4f2f52\") " pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:41.832370 master-0 kubenswrapper[4409]: I1203 14:26:41.832318 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"installer-6-master-0\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:41.836633 master-0 kubenswrapper[4409]: I1203 14:26:41.836094 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cnd5\" (UniqueName: \"kubernetes.io/projected/a5b3c1fb-6f81-4067-98da-681d6c7c33e4-kube-api-access-9cnd5\") pod \"catalog-operator-7cf5cf757f-zgm6l\" (UID: \"a5b3c1fb-6f81-4067-98da-681d6c7c33e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:41.836633 master-0 kubenswrapper[4409]: I1203 14:26:41.836550 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:41.839457 master-0 kubenswrapper[4409]: I1203 14:26:41.839422 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxkd\" (UniqueName: 
\"kubernetes.io/projected/98392f8e-0285-4bc3-95a9-d29033639ca3-kube-api-access-djxkd\") pod \"dns-operator-6b7bcd6566-jh9m8\" (UID: \"98392f8e-0285-4bc3-95a9-d29033639ca3\") " pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:41.856647 master-0 kubenswrapper[4409]: I1203 14:26:41.856551 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:26:41.856647 master-0 kubenswrapper[4409]: I1203 14:26:41.856558 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:26:41.860445 master-0 kubenswrapper[4409]: I1203 14:26:41.860405 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" Dec 03 14:26:41.867885 master-0 kubenswrapper[4409]: I1203 14:26:41.867834 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:41.877353 master-0 kubenswrapper[4409]: I1203 14:26:41.877306 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" Dec 03 14:26:41.884567 master-0 kubenswrapper[4409]: I1203 14:26:41.884397 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:26:41.887578 master-0 kubenswrapper[4409]: I1203 14:26:41.887076 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:41.925894 master-0 kubenswrapper[4409]: I1203 14:26:41.920382 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" Dec 03 14:26:41.990070 master-0 kubenswrapper[4409]: I1203 14:26:41.982873 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk6r\" (UniqueName: \"kubernetes.io/projected/ab40dfa2-d8f8-4300-8a10-5aa73e1d6294-kube-api-access-5mk6r\") pod \"control-plane-machine-set-operator-66f4cc99d4-x278n\" (UID: \"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294\") " pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:41.990070 master-0 kubenswrapper[4409]: I1203 14:26:41.986341 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn7fm\" (UniqueName: \"kubernetes.io/projected/a192c38a-4bfa-40fe-9a2d-d48260cf6443-kube-api-access-fn7fm\") pod \"certified-operators-t8rt7\" (UID: \"a192c38a-4bfa-40fe-9a2d-d48260cf6443\") " pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:41.990070 master-0 kubenswrapper[4409]: I1203 14:26:41.986749 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:41.991466 master-0 kubenswrapper[4409]: I1203 14:26:41.991372 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqvl\" (UniqueName: \"kubernetes.io/projected/33a557d1-cdd9-47ff-afbd-a301e7f589a7-kube-api-access-dmqvl\") pod \"route-controller-manager-74cff6cf84-bh8rz\" (UID: \"33a557d1-cdd9-47ff-afbd-a301e7f589a7\") " pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:41.997135 master-0 kubenswrapper[4409]: I1203 14:26:41.996897 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhf9r\" (UniqueName: \"kubernetes.io/projected/911f6333-cdb0-425c-b79b-f892444b7097-kube-api-access-mhf9r\") pod \"redhat-operators-6z4sc\" (UID: \"911f6333-cdb0-425c-b79b-f892444b7097\") " pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:42.018102 master-0 kubenswrapper[4409]: I1203 14:26:41.999443 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" Dec 03 14:26:42.018102 master-0 kubenswrapper[4409]: I1203 14:26:42.009747 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" Dec 03 14:26:42.018102 master-0 kubenswrapper[4409]: I1203 14:26:42.017932 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltsnd\" (UniqueName: \"kubernetes.io/projected/7663a25e-236d-4b1d-83ce-733ab146dee3-kube-api-access-ltsnd\") pod \"cluster-autoscaler-operator-7f88444875-6dk29\" (UID: \"7663a25e-236d-4b1d-83ce-733ab146dee3\") " pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:42.142447 master-0 kubenswrapper[4409]: I1203 14:26:42.141823 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" Dec 03 14:26:42.184047 master-0 kubenswrapper[4409]: I1203 14:26:42.183525 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:42.214665 master-0 kubenswrapper[4409]: I1203 14:26:42.214587 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:26:42.222544 master-0 kubenswrapper[4409]: I1203 14:26:42.222304 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:42.275369 master-0 kubenswrapper[4409]: I1203 14:26:42.275314 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" Dec 03 14:26:42.530875 master-0 kubenswrapper[4409]: I1203 14:26:42.530795 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"f707b5a65c5a5509b5382c713002ab668a4590bc8b8a861cec9c1fbd38881498"} Dec 03 14:26:42.533226 master-0 kubenswrapper[4409]: I1203 14:26:42.532896 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-96zcz" event={"ID":"44af6af5-cecb-4dc4-b793-e8e350f8a47d","Type":"ContainerStarted","Data":"37721ac687d32913bf6bdceb859214fe93d7984f33c985cd6f712e4050af4f50"} Dec 03 14:26:42.537257 master-0 kubenswrapper[4409]: I1203 14:26:42.537194 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" 
event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"3fcbc3619c0e7f1874a668bc8895d798f0933e05254131ed6a1f76623ae161c0"} Dec 03 14:26:42.660343 master-0 kubenswrapper[4409]: I1203 14:26:42.653351 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:42.660343 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:42.660343 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:42.660343 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:42.660343 master-0 kubenswrapper[4409]: I1203 14:26:42.653859 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:43.176727 master-0 kubenswrapper[4409]: W1203 14:26:43.176293 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e0a2889_39a5_471e_bd46_958e2f8eacaa.slice/crio-318c285367ab74450349077b1374012191ba0291537a09607adae95f24388180 WatchSource:0}: Error finding container 318c285367ab74450349077b1374012191ba0291537a09607adae95f24388180: Status 404 returned error can't find the container with id 318c285367ab74450349077b1374012191ba0291537a09607adae95f24388180 Dec 03 14:26:43.430154 master-0 kubenswrapper[4409]: I1203 14:26:43.426339 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:43.430154 master-0 kubenswrapper[4409]: [-]has-synced failed: reason 
withheld Dec 03 14:26:43.430154 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:43.430154 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:43.430154 master-0 kubenswrapper[4409]: I1203 14:26:43.426405 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:43.560970 master-0 kubenswrapper[4409]: I1203 14:26:43.558216 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"210695cc272567f2ac50b5b585b055de109997a0015545ba86ef8d6d18bb15a3"} Dec 03 14:26:43.560970 master-0 kubenswrapper[4409]: I1203 14:26:43.560368 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"76c76172c77d4391b7e4160aaf55731be2e09b805b27a40b70a42c68de592261"} Dec 03 14:26:43.560970 master-0 kubenswrapper[4409]: I1203 14:26:43.560423 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" event={"ID":"9e0a2889-39a5-471e-bd46-958e2f8eacaa","Type":"ContainerStarted","Data":"318c285367ab74450349077b1374012191ba0291537a09607adae95f24388180"} Dec 03 14:26:43.576660 master-0 kubenswrapper[4409]: I1203 14:26:43.561082 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:43.576660 master-0 kubenswrapper[4409]: I1203 14:26:43.574355 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" 
event={"ID":"b1b3ab29-77cf-48ac-8881-846c46bb9048","Type":"ContainerStarted","Data":"085bc1c4ecdd7b03a393bb0bd3b335c18668f456357b06606e0a61c9bc794bb8"} Dec 03 14:26:43.576660 master-0 kubenswrapper[4409]: I1203 14:26:43.574413 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c696657b7-452tx" event={"ID":"b1b3ab29-77cf-48ac-8881-846c46bb9048","Type":"ContainerStarted","Data":"f26b7b14e95226333f79513efea2fbb740824445e4365bdc24ca46613476cf15"} Dec 03 14:26:43.581878 master-0 kubenswrapper[4409]: I1203 14:26:43.580195 4409 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection refused" start-of-body= Dec 03 14:26:43.581878 master-0 kubenswrapper[4409]: I1203 14:26:43.580270 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" podUID="9e0a2889-39a5-471e-bd46-958e2f8eacaa" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.128.0.73:8443/healthz\": dial tcp 10.128.0.73:8443: connect: connection refused" Dec 03 14:26:43.581878 master-0 kubenswrapper[4409]: I1203 14:26:43.581819 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-664c9d94c9-9vfr4" event={"ID":"4df2889c-99f7-402a-9d50-18ccf427179c","Type":"ContainerStarted","Data":"84117577340f7d624bfa59166b738fd9cc6344c0f2e5f99736f9162d7d35146f"} Dec 03 14:26:43.594182 master-0 kubenswrapper[4409]: I1203 14:26:43.594139 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" 
event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerStarted","Data":"87732635fbd1d41343c81f9da0ba28c1db41ebda9b9ab9b235767cc746eb95c8"} Dec 03 14:26:43.619977 master-0 kubenswrapper[4409]: I1203 14:26:43.613446 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"45e9a85e7f943113821bbdbc1bf92aee8062b53a4a592c0c31a2aaed5d284635"} Dec 03 14:26:43.627623 master-0 kubenswrapper[4409]: I1203 14:26:43.625743 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"ef91be6f84cbda29adbcc26478c518cd0f44fd9da388facfa0644ab9eb9e0f3c"} Dec 03 14:26:43.627623 master-0 kubenswrapper[4409]: I1203 14:26:43.625798 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" event={"ID":"b02244d0-f4ef-4702-950d-9e3fb5ced128","Type":"ContainerStarted","Data":"70e825fbf7791c1be65e2ceb7781c8f3927e6cc478bac2db4b42f6ec5f5e54d6"} Dec 03 14:26:43.662144 master-0 kubenswrapper[4409]: W1203 14:26:43.654291 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36da3c2f_860c_4188_a7d7_5b615981a835.slice/crio-6cb57608ea388e5a1e92e509e973ace6792e0f75d7c7900fc2fbd3c5cfddfe37 WatchSource:0}: Error finding container 6cb57608ea388e5a1e92e509e973ace6792e0f75d7c7900fc2fbd3c5cfddfe37: Status 404 returned error can't find the container with id 6cb57608ea388e5a1e92e509e973ace6792e0f75d7c7900fc2fbd3c5cfddfe37 Dec 03 14:26:43.670836 master-0 kubenswrapper[4409]: W1203 14:26:43.669344 4409 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8a9c244_f0b3_42e8_ae50_7012c4ecc0ff.slice/crio-41632d1c432947f43ec9b4acd271dd160e90584613cd3c70d82735c33613ae88 WatchSource:0}: Error finding container 41632d1c432947f43ec9b4acd271dd160e90584613cd3c70d82735c33613ae88: Status 404 returned error can't find the container with id 41632d1c432947f43ec9b4acd271dd160e90584613cd3c70d82735c33613ae88 Dec 03 14:26:44.144655 master-0 kubenswrapper[4409]: I1203 14:26:44.144067 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Dec 03 14:26:44.157034 master-0 kubenswrapper[4409]: W1203 14:26:44.154919 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9c016f10_6cf2_4409_9365_05ae2e2adc5a.slice/crio-3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d WatchSource:0}: Error finding container 3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d: Status 404 returned error can't find the container with id 3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d Dec 03 14:26:44.161427 master-0 kubenswrapper[4409]: W1203 14:26:44.158779 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadbcce01_7282_4a75_843a_9623060346f0.slice/crio-89855dad3d8d5ab2693bff92f9d580a74e05cc1910d4c94eb7b4ddd932150a17 WatchSource:0}: Error finding container 89855dad3d8d5ab2693bff92f9d580a74e05cc1910d4c94eb7b4ddd932150a17: Status 404 returned error can't find the container with id 89855dad3d8d5ab2693bff92f9d580a74e05cc1910d4c94eb7b4ddd932150a17 Dec 03 14:26:44.418454 master-0 kubenswrapper[4409]: I1203 14:26:44.418400 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 
14:26:44.418454 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:44.418454 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:44.418454 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:44.418674 master-0 kubenswrapper[4409]: I1203 14:26:44.418476 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:44.669456 master-0 kubenswrapper[4409]: W1203 14:26:44.668956 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeefee934_ac6b_44e3_a6be_1ae62362ab4f.slice/crio-a454ed90167fb03c560e0538847491802736458f6443da3e1e96ab8dab06e323 WatchSource:0}: Error finding container a454ed90167fb03c560e0538847491802736458f6443da3e1e96ab8dab06e323: Status 404 returned error can't find the container with id a454ed90167fb03c560e0538847491802736458f6443da3e1e96ab8dab06e323 Dec 03 14:26:44.673941 master-0 kubenswrapper[4409]: W1203 14:26:44.673803 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda192c38a_4bfa_40fe_9a2d_d48260cf6443.slice/crio-4a31b680223a25dc44071b18bd153a99304e6a44634adb1762417a2bad855419 WatchSource:0}: Error finding container 4a31b680223a25dc44071b18bd153a99304e6a44634adb1762417a2bad855419: Status 404 returned error can't find the container with id 4a31b680223a25dc44071b18bd153a99304e6a44634adb1762417a2bad855419 Dec 03 14:26:44.673941 master-0 kubenswrapper[4409]: I1203 14:26:44.673876 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"da1e962d9c88dcdb50624661a3d536f73d46b79dd058e1a7022b56ff8b62dffa"} 
Dec 03 14:26:44.673941 master-0 kubenswrapper[4409]: I1203 14:26:44.673938 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"596e4cbeef1a005d7002249c6a2579fc54ba3908554126bec65d316d49ef756a"} Dec 03 14:26:44.684117 master-0 kubenswrapper[4409]: I1203 14:26:44.683953 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"e89d943c403e894673999e9fc7c29e34d548ed192d01706644440a5960556d56"} Dec 03 14:26:44.684117 master-0 kubenswrapper[4409]: I1203 14:26:44.684025 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"f4dbd7383f6eac98dcda7b44df67d2e08d1a93104ef5880a2d8a2236c2c32c0c"} Dec 03 14:26:44.699017 master-0 kubenswrapper[4409]: I1203 14:26:44.697808 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"217b0ab598d44443aa0aa74dba4915bc816cd61734836bf3648989a99b8a2792"} Dec 03 14:26:44.705429 master-0 kubenswrapper[4409]: I1203 14:26:44.703159 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-6964bb78b7-g4lv2" event={"ID":"f8c6a484-5f0e-4abc-bc48-934ad0ffde0a","Type":"ContainerStarted","Data":"4f17cba2c099d8717e710b3e3b6a5c500e469e95ee3328e65b9db8290beb7da9"} Dec 03 14:26:44.710672 master-0 kubenswrapper[4409]: I1203 14:26:44.710602 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" 
event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"602c3d75eea03cfc975a6cbe722958ab20912457e75da088f7be10d2b6845482"} Dec 03 14:26:44.710782 master-0 kubenswrapper[4409]: I1203 14:26:44.710686 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"97bcb56472bf0f6563bc9ad2cf476b155bedb950dbdca2c15b2908482531c6da"} Dec 03 14:26:44.710782 master-0 kubenswrapper[4409]: I1203 14:26:44.710703 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-74cddd4fb5-phk6r" event={"ID":"8c6fa89f-268c-477b-9f04-238d2305cc89","Type":"ContainerStarted","Data":"0873fcef4c829dd42dd58c2cc8b4dcf069847b2fb581267f7ff64db8133f0ea2"} Dec 03 14:26:44.718777 master-0 kubenswrapper[4409]: I1203 14:26:44.718705 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"7b549b6dd948dade693112cdee062d669d5ba45755f741d6c55819183d11f69d"} Dec 03 14:26:44.724127 master-0 kubenswrapper[4409]: I1203 14:26:44.724061 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"6cf1988df9471cf311ad1e6a443f966181694b6ce65ec81dc70a044ad5dc919d"} Dec 03 14:26:44.727397 master-0 kubenswrapper[4409]: I1203 14:26:44.727347 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"91a39ffdd5ac2ad4295b0895033b4326bb42a346a93431b30d022645050891bd"} Dec 03 14:26:44.727397 master-0 kubenswrapper[4409]: I1203 
14:26:44.727407 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-6b8bb995f7-b68p8" event={"ID":"36da3c2f-860c-4188-a7d7-5b615981a835","Type":"ContainerStarted","Data":"6cb57608ea388e5a1e92e509e973ace6792e0f75d7c7900fc2fbd3c5cfddfe37"} Dec 03 14:26:44.732601 master-0 kubenswrapper[4409]: I1203 14:26:44.732368 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"aa8b3795979bf762d3f6b94c441d6cc95e7c597d1bd1b83532ed3eed6ea6e678"} Dec 03 14:26:44.732601 master-0 kubenswrapper[4409]: I1203 14:26:44.732451 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-9f69p" event={"ID":"adbcce01-7282-4a75-843a-9623060346f0","Type":"ContainerStarted","Data":"89855dad3d8d5ab2693bff92f9d580a74e05cc1910d4c94eb7b4ddd932150a17"} Dec 03 14:26:44.734905 master-0 kubenswrapper[4409]: I1203 14:26:44.734729 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"c446353d6cd758d692599d87beaee487925dca4aaaada08db5adafc7b995ff79"} Dec 03 14:26:44.734905 master-0 kubenswrapper[4409]: I1203 14:26:44.734776 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" event={"ID":"d8a9c244-f0b3-42e8-ae50-7012c4ecc0ff","Type":"ContainerStarted","Data":"41632d1c432947f43ec9b4acd271dd160e90584613cd3c70d82735c33613ae88"} Dec 03 14:26:44.741062 master-0 kubenswrapper[4409]: I1203 14:26:44.740582 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"29844e31031ff9aefcc30c4804fbd02aacff79b3b3dfdab913991c744ae55e7e"} Dec 03 14:26:44.746351 master-0 kubenswrapper[4409]: I1203 14:26:44.744727 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"f457e0a0e04e69ea4ac0c62723ca5ff52f513224eeffe71ae4f3be9fe8bb4941"} Dec 03 14:26:44.748064 master-0 kubenswrapper[4409]: I1203 14:26:44.747824 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"6494bc244473023b4a7ecdc37b26605afdf9b12fd86ee533dac7eb3392c28710"} Dec 03 14:26:44.750957 master-0 kubenswrapper[4409]: I1203 14:26:44.750801 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"9c016f10-6cf2-4409-9365-05ae2e2adc5a","Type":"ContainerStarted","Data":"3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d"} Dec 03 14:26:44.761529 master-0 kubenswrapper[4409]: I1203 14:26:44.761441 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-69cc794c58-mfjk2" event={"ID":"ea5f8f90-b3ff-4f73-b2d7-6fcb7e5e6b7d","Type":"ContainerStarted","Data":"460cdb2aa4f0e978d71731c42ad31ebae9099dba9fe8fc73da6226c7a44e504f"} Dec 03 14:26:44.768452 master-0 kubenswrapper[4409]: I1203 14:26:44.768133 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"2cae1e90957cf00a5ccee6fd4f60bd1a08352e19ac707b180d8a15f99b068167"} Dec 03 14:26:44.768452 master-0 kubenswrapper[4409]: I1203 14:26:44.768225 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"a2b94099ba541a839e6779611baaacf3157931c06004082597169ba2510d5beb"} Dec 03 14:26:44.780240 master-0 kubenswrapper[4409]: I1203 14:26:44.779922 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"65e2a14ad1777fbc4327f825e8199cff3dd0740a310907b59b003b98a811b280"} Dec 03 14:26:44.789149 master-0 kubenswrapper[4409]: I1203 14:26:44.789048 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"696e0814360442ac77f713ed1623903fd5f583b2cf820fc7671059fe2ff12ad0"} Dec 03 14:26:44.789149 master-0 kubenswrapper[4409]: I1203 14:26:44.789142 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-86bh9" event={"ID":"5aa67ace-d03a-4d06-9fb5-24777b65f2cc","Type":"ContainerStarted","Data":"db3a7023556a85b99669d93632f1bfdc1d082c459f638773c8a87b5a10767eaf"} Dec 03 14:26:44.803907 master-0 kubenswrapper[4409]: I1203 14:26:44.803851 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"34a1383a0d256ec28148ffaf5ecc6426351def0d77e3b9d59ea7428cf00a8e21"} Dec 03 14:26:44.803907 master-0 kubenswrapper[4409]: I1203 14:26:44.803910 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" event={"ID":"55351b08-d46d-4327-aa5e-ae17fdffdfb5","Type":"ContainerStarted","Data":"986399195e927dda261be4e452c349a48da0f951b60f5190309de528569e16db"} 
Dec 03 14:26:44.804937 master-0 kubenswrapper[4409]: I1203 14:26:44.804871 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:44.819307 master-0 kubenswrapper[4409]: I1203 14:26:44.810770 4409 patch_prober.go:28] interesting pod/marketplace-operator-7d67745bb7-dwcxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body= Dec 03 14:26:44.819307 master-0 kubenswrapper[4409]: I1203 14:26:44.810868 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" Dec 03 14:26:44.819307 master-0 kubenswrapper[4409]: I1203 14:26:44.814032 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"5f4fad2f1cba2bc0ce434c73c49ffd7165295c87572fb597be9de1b590cb7ea4"} Dec 03 14:26:44.819307 master-0 kubenswrapper[4409]: I1203 14:26:44.817680 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"10a1545a8a99d609477d51b930ae6c65fa2a82b37102a4c0c38f4a57186a9170"} Dec 03 14:26:44.819307 master-0 kubenswrapper[4409]: I1203 14:26:44.817713 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vkpv4" event={"ID":"e3675c78-1902-4b92-8a93-cf2dc316f060","Type":"ContainerStarted","Data":"a886acccedf4040c1e8ec6a8e935f91987b78236eeab5a619dc59a0be88dfd95"} 
Dec 03 14:26:44.831832 master-0 kubenswrapper[4409]: I1203 14:26:44.830754 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"691f97f7ff01ff94916acca8d9a365d5aae3fa03c0f9c0115a414d77003f64d7"} Dec 03 14:26:44.840783 master-0 kubenswrapper[4409]: I1203 14:26:44.840719 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"5ad64d2f2ef98c55bcd745af2339313e8ee438f55103919ced6be848db6c0d78"} Dec 03 14:26:44.840901 master-0 kubenswrapper[4409]: I1203 14:26:44.840790 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"871f9ab48b75f4b39ecfe93541c4294213240dbd7385d737e161557f4f978e07"} Dec 03 14:26:44.849886 master-0 kubenswrapper[4409]: I1203 14:26:44.849787 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"c7516eabb2115584bdfd30f4bc1164511a8e3d821e4e4c8a3f30cd75046e8b67"} Dec 03 14:26:44.865176 master-0 kubenswrapper[4409]: I1203 14:26:44.865117 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-4wqc6" Dec 03 14:26:44.900090 master-0 kubenswrapper[4409]: I1203 14:26:44.895687 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Dec 03 14:26:45.071170 master-0 kubenswrapper[4409]: W1203 14:26:45.071042 4409 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b4c4f1f_d61e_483e_8c0b_6e2774437e4d.slice/crio-09a16d1815d944a050280257d3fa95c72fa86fded1694b650fb0b80a04ed5bca WatchSource:0}: Error finding container 09a16d1815d944a050280257d3fa95c72fa86fded1694b650fb0b80a04ed5bca: Status 404 returned error can't find the container with id 09a16d1815d944a050280257d3fa95c72fa86fded1694b650fb0b80a04ed5bca Dec 03 14:26:45.447170 master-0 kubenswrapper[4409]: I1203 14:26:45.444585 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:45.447170 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:45.447170 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:45.447170 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:45.447170 master-0 kubenswrapper[4409]: I1203 14:26:45.445061 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:45.618555 master-0 kubenswrapper[4409]: W1203 14:26:45.618078 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8dc6511_7339_4269_9d43_14ce53bb4e7f.slice/crio-b9aab5ccbbe6f73cfaef58c284cc470719e5c60ca9628c825beff947d2779cf7 WatchSource:0}: Error finding container b9aab5ccbbe6f73cfaef58c284cc470719e5c60ca9628c825beff947d2779cf7: Status 404 returned error can't find the container with id b9aab5ccbbe6f73cfaef58c284cc470719e5c60ca9628c825beff947d2779cf7 Dec 03 14:26:45.622368 master-0 kubenswrapper[4409]: W1203 14:26:45.622322 4409 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b3c1fb_6f81_4067_98da_681d6c7c33e4.slice/crio-30cc7d802ef3a626c3e7cc51d687d6753b361cfd9e5f4483cd03fe3f75e8a5e8 WatchSource:0}: Error finding container 30cc7d802ef3a626c3e7cc51d687d6753b361cfd9e5f4483cd03fe3f75e8a5e8: Status 404 returned error can't find the container with id 30cc7d802ef3a626c3e7cc51d687d6753b361cfd9e5f4483cd03fe3f75e8a5e8 Dec 03 14:26:45.864797 master-0 kubenswrapper[4409]: I1203 14:26:45.864618 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"4a31b680223a25dc44071b18bd153a99304e6a44634adb1762417a2bad855419"} Dec 03 14:26:45.869376 master-0 kubenswrapper[4409]: I1203 14:26:45.869257 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"fd587a1bd6a7166e120109922329dd7577b68676c1997fa4b1898fc2d03fc4c4"} Dec 03 14:26:45.871618 master-0 kubenswrapper[4409]: I1203 14:26:45.871570 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"5034130f5528094d3786ebf253cfc80a0d7e2b8c483587c920c7b22f92e942b7"} Dec 03 14:26:45.873622 master-0 kubenswrapper[4409]: I1203 14:26:45.873556 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"9c016f10-6cf2-4409-9365-05ae2e2adc5a","Type":"ContainerStarted","Data":"13136c43d490bf4821772cae2a446c6c229e324bf44e67e338c736015813b8e8"} Dec 03 14:26:45.875842 master-0 kubenswrapper[4409]: I1203 14:26:45.875781 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" 
event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"23277820e4b1e5977dad6b2eeb59f85914901db697b80c134e723751872c2446"} Dec 03 14:26:45.876047 master-0 kubenswrapper[4409]: I1203 14:26:45.875851 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"a454ed90167fb03c560e0538847491802736458f6443da3e1e96ab8dab06e323"} Dec 03 14:26:45.878966 master-0 kubenswrapper[4409]: I1203 14:26:45.878885 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"e63247e2358c03b1050776b853b99c30a63cb35cf223fd4734d4aa30d2cd96a4"} Dec 03 14:26:45.881097 master-0 kubenswrapper[4409]: I1203 14:26:45.881023 4409 generic.go:334] "Generic (PLEG): container finished" podID="5d838c1a-22e2-4096-9739-7841ef7d06ba" containerID="55c88d82b6f617d52e298522641242d1c46200c37e28c26e641d982f477d4876" exitCode=0 Dec 03 14:26:45.881412 master-0 kubenswrapper[4409]: I1203 14:26:45.881351 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerDied","Data":"55c88d82b6f617d52e298522641242d1c46200c37e28c26e641d982f477d4876"} Dec 03 14:26:45.882626 master-0 kubenswrapper[4409]: I1203 14:26:45.882559 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"aa23d818c0d25b263a4738729ad5acb0e2ddee5056ba24636c0e5dc6c82147e1"} Dec 03 14:26:45.884171 master-0 kubenswrapper[4409]: I1203 14:26:45.884124 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-kwb74" event={"ID":"b051ae27-7879-448d-b426-4dce76e29739","Type":"ContainerStarted","Data":"0b927ec5467d9842eb3cf33cde4f7338bc5639c9e6ecef826f223a04981df558"} Dec 03 14:26:45.886562 master-0 kubenswrapper[4409]: I1203 14:26:45.886439 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"2d10b2c0ecc970cf3a1cd85151829b3576c63089f9f58526f53121d44e8dffb1"} Dec 03 14:26:45.888392 master-0 kubenswrapper[4409]: I1203 14:26:45.888354 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"6404392c254b8a9f69c97fc410d37369dd61ef3cc8af7ef1506bbcc2315f675b"} Dec 03 14:26:45.920575 master-0 kubenswrapper[4409]: I1203 14:26:45.890842 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"a67ce0be97bec27fca829ede96a60be29f98a005cc6e32c7ef046eac65a10c2f"} Dec 03 14:26:45.920575 master-0 kubenswrapper[4409]: I1203 14:26:45.891590 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"8e15f70fb61c6eaf6b1c5c8ac96faaebcec1ef3a6043645e2892e887b0e441f7"} Dec 03 14:26:45.920575 master-0 kubenswrapper[4409]: I1203 14:26:45.893144 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" 
event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"b2864a4f3ec1cde9adc5bca7771f99c5801328d5ec494e3508ff4b084ef8141f"} Dec 03 14:26:45.920575 master-0 kubenswrapper[4409]: I1203 14:26:45.893744 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"09a16d1815d944a050280257d3fa95c72fa86fded1694b650fb0b80a04ed5bca"} Dec 03 14:26:45.935927 master-0 kubenswrapper[4409]: I1203 14:26:45.935861 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"621eaade11248202b4139049e5dc01faf60659a00ab3909a10b61089386d8fc4"} Dec 03 14:26:45.939706 master-0 kubenswrapper[4409]: I1203 14:26:45.939636 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"92d4a760a85823f4430d0d3b9cb7d369ab910eceb30388c7fe37daf32269b882"} Dec 03 14:26:45.941163 master-0 kubenswrapper[4409]: I1203 14:26:45.941126 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"f3c1d71098b10def0795c6d580454c24be64b7860500b9d37b21bbfb4c2e82fe"} Dec 03 14:26:45.944400 master-0 kubenswrapper[4409]: I1203 14:26:45.944356 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-rrfsm" event={"ID":"04e9e2a5-cdc2-42af-ab2c-49525390be6d","Type":"ContainerStarted","Data":"3a4a1fb93ff0ef783f70baa1260bc2342c24d1cc6b1cbe66fbe08243a1d58e8f"} Dec 03 14:26:45.947551 master-0 kubenswrapper[4409]: I1203 
14:26:45.947500 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-59d99f9b7b-74sss" event={"ID":"c95705e3-17ef-40fe-89e8-22586a32621b","Type":"ContainerStarted","Data":"8bed5c3c3ea4c97a8e2048953e6993899f182d695832e9d1687cf73513a08b27"} Dec 03 14:26:45.982924 master-0 kubenswrapper[4409]: I1203 14:26:45.982876 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerStarted","Data":"f4a042cfaf36063bf7c4b0b71d9f2332eee6124f35bb7fb25593036bcd8bdf7b"} Dec 03 14:26:45.989619 master-0 kubenswrapper[4409]: I1203 14:26:45.988757 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" event={"ID":"33a557d1-cdd9-47ff-afbd-a301e7f589a7","Type":"ContainerStarted","Data":"67e07c300e39f4e99ee3986a80fdc66fbdfba098769ef8f3e2149dda655332df"} Dec 03 14:26:46.003708 master-0 kubenswrapper[4409]: I1203 14:26:46.001623 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"6be147fe-84e2-429b-9d53-91fd67fef7c4","Type":"ContainerStarted","Data":"a8b042abc190e543cc898e3304f92c31d4f8880188b013310dc27ff11b6f74e3"} Dec 03 14:26:46.011290 master-0 kubenswrapper[4409]: I1203 14:26:46.011204 4409 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" containerID="e05ec0048f602190d797224f5ef02ba517ec017f1065dfde54dd13beb2ed3716" exitCode=0 Dec 03 14:26:46.011371 master-0 kubenswrapper[4409]: I1203 14:26:46.011284 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"e05ec0048f602190d797224f5ef02ba517ec017f1065dfde54dd13beb2ed3716"} Dec 03 14:26:46.011371 master-0 kubenswrapper[4409]: I1203 
14:26:46.011338 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"6ae8191b5ac9cdef1130794960e91e672e15b26bcdc9fd6bb78450874a7f3565"} Dec 03 14:26:46.030972 master-0 kubenswrapper[4409]: I1203 14:26:46.029023 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"b7ba854c548ea9dd4ebe5163cbd57bf1d7e054426bcb36582f50c76ecc6b7641"} Dec 03 14:26:46.033508 master-0 kubenswrapper[4409]: I1203 14:26:46.032771 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"4d5ef7b6180ab660ee8950b66a12863de31d794ab345c8ed0114ebfd735a7ed4"} Dec 03 14:26:46.040326 master-0 kubenswrapper[4409]: I1203 14:26:46.040282 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"f7264c2499d90bb4b1b573391e8dd82bbf0a40507755a0d2a536312c9ed2ba57"} Dec 03 14:26:46.079804 master-0 kubenswrapper[4409]: I1203 14:26:46.079755 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-pcchm" event={"ID":"6d38d102-4efe-4ed3-ae23-b1e295cdaccd","Type":"ContainerStarted","Data":"bc4fbd1dd599a152d8975587303b090c14a7cfc5ba6902d5bcb8e9e377802c39"} Dec 03 14:26:46.092214 master-0 kubenswrapper[4409]: I1203 14:26:46.087197 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:26:46.097200 master-0 kubenswrapper[4409]: I1203 14:26:46.096948 4409 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerStarted","Data":"1d42f7d5d372f9a3877b3b45dcf0372cf75642272766e45ab80af733ac039158"} Dec 03 14:26:46.103772 master-0 kubenswrapper[4409]: I1203 14:26:46.099109 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"de44829678408e2535e8068d88bcbceeb0e4425a7a8ada6a766e920133cafe08"} Dec 03 14:26:46.105713 master-0 kubenswrapper[4409]: I1203 14:26:46.105678 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"25c436b7156e59a69a650580e091139420f85f5df23414681f118d8e19936788"} Dec 03 14:26:46.108364 master-0 kubenswrapper[4409]: I1203 14:26:46.108327 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"30cc7d802ef3a626c3e7cc51d687d6753b361cfd9e5f4483cd03fe3f75e8a5e8"} Dec 03 14:26:46.112748 master-0 kubenswrapper[4409]: I1203 14:26:46.111754 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"1bfa52a568c1fb9d38c26f1bff18e8a5c25b3f79fa2585637a6b29619a683510"} Dec 03 14:26:46.113418 master-0 kubenswrapper[4409]: I1203 14:26:46.113110 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"f29e2f89f006a9a42201ade85b84c161605beb0d06ec366ac404f7c9528f4fe3"} Dec 03 
14:26:46.114494 master-0 kubenswrapper[4409]: I1203 14:26:46.114468 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"17fe3fec29213b2cb5e58aaad747cb94b9b15fb46d120f43a6578fb518d43440"} Dec 03 14:26:46.125837 master-0 kubenswrapper[4409]: I1203 14:26:46.124486 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"2afaaeffb11cde952b3ce3a624d05ec4b30f0844b21d0e43fd1e88bf8c8983a5"} Dec 03 14:26:46.125837 master-0 kubenswrapper[4409]: I1203 14:26:46.125785 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"b8c59c579ea30cbc154e1815d25a67d102eade95a9d755c7f806df461a30fa62"} Dec 03 14:26:46.133828 master-0 kubenswrapper[4409]: I1203 14:26:46.133586 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"184b6b229c47676bb7ee1b8198d15f7df7192b02421d4284e2b69150cd65e03a"} Dec 03 14:26:46.134554 master-0 kubenswrapper[4409]: I1203 14:26:46.134494 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"8c5125a70ff0ca2242fcbc9587f965ec8db13c864fa3287fce03ae52c8f5d761"} Dec 03 14:26:46.138135 master-0 kubenswrapper[4409]: I1203 14:26:46.135833 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" 
event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"9ac2094fcea4ef552ae7871831dca98a4ac7e2f993131f59886c2d7a532622d3"} Dec 03 14:26:46.138135 master-0 kubenswrapper[4409]: I1203 14:26:46.137060 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"ad56e41e9fc6b131a9786b417f8188d04fa09fe5df031f3a5d93e78abe6bd4c0"} Dec 03 14:26:46.140171 master-0 kubenswrapper[4409]: I1203 14:26:46.139220 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"03fd337947dd9a0afec95c9865c8dd62561d7804e6f79ecc81fc6817b53c22fc"} Dec 03 14:26:46.143364 master-0 kubenswrapper[4409]: I1203 14:26:46.143278 4409 generic.go:334] "Generic (PLEG): container finished" podID="56649bd4-ac30-4a70-8024-772294fede88" containerID="5ad64d2f2ef98c55bcd745af2339313e8ee438f55103919ced6be848db6c0d78" exitCode=0 Dec 03 14:26:46.143481 master-0 kubenswrapper[4409]: I1203 14:26:46.143449 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerDied","Data":"5ad64d2f2ef98c55bcd745af2339313e8ee438f55103919ced6be848db6c0d78"} Dec 03 14:26:46.144874 master-0 kubenswrapper[4409]: I1203 14:26:46.144836 4409 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:26:46.146777 master-0 kubenswrapper[4409]: I1203 14:26:46.146481 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"b9aab5ccbbe6f73cfaef58c284cc470719e5c60ca9628c825beff947d2779cf7"} Dec 03 14:26:46.147945 master-0 
kubenswrapper[4409]: I1203 14:26:46.147904 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"60cdbccea04f269472b43bcab27e934dd50e408c74c9272233c7a6a8a5229eaa"} Dec 03 14:26:46.149054 master-0 kubenswrapper[4409]: I1203 14:26:46.149006 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"f4ec9f183afae5d8425ea746251a0ba03b0e4c73a3c69aeae4958f35ebc19c5e"} Dec 03 14:26:46.151621 master-0 kubenswrapper[4409]: I1203 14:26:46.151591 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"a6036119cb94018e32e483420b95ca21abd041bac6e8b0606820640580ab4f29"} Dec 03 14:26:46.151621 master-0 kubenswrapper[4409]: I1203 14:26:46.151622 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:46.153262 master-0 kubenswrapper[4409]: I1203 14:26:46.153219 4409 patch_prober.go:28] interesting pod/marketplace-operator-7d67745bb7-dwcxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: connection refused" start-of-body= Dec 03 14:26:46.153335 master-0 kubenswrapper[4409]: I1203 14:26:46.153285 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" podUID="55351b08-d46d-4327-aa5e-ae17fdffdfb5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.21:8080/healthz\": dial tcp 10.128.0.21:8080: connect: 
connection refused" Dec 03 14:26:46.422060 master-0 kubenswrapper[4409]: I1203 14:26:46.420767 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:46.422060 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:46.422060 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:46.422060 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:46.422060 master-0 kubenswrapper[4409]: I1203 14:26:46.420844 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:46.634460 master-0 kubenswrapper[4409]: I1203 14:26:46.632574 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7c64dd9d8b-49skr" Dec 03 14:26:47.198040 master-0 kubenswrapper[4409]: I1203 14:26:47.197957 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5fdc576499-j2n8j" event={"ID":"690d1f81-7b1f-4fd0-9b6e-154c9687c744","Type":"ContainerStarted","Data":"9fbbd8676bdefcf959657cffabe14728551cc0b0ffa2dde7a9ed8d38bbc1c153"} Dec 03 14:26:47.206503 master-0 kubenswrapper[4409]: I1203 14:26:47.206441 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"ebf58f81fcfb834f6f3420c05cab835008ded0127613958da12e2e310a9cf5b0"} Dec 03 14:26:47.211249 master-0 kubenswrapper[4409]: I1203 14:26:47.211168 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-44frm" event={"ID":"c180b512-bf0c-4ddc-a5cf-f04acc830a61","Type":"ContainerStarted","Data":"7e4ccc4db07a0d4ff5064c7913e1bdd836e907a3c2a082eb733c0aa82f4951b3"} Dec 03 14:26:47.212982 master-0 kubenswrapper[4409]: I1203 14:26:47.212949 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"739115f0dc72763bfcd52e6d2f364f8f8491c19e6301922ca3a0c6b0b01d64f7"} Dec 03 14:26:47.216762 master-0 kubenswrapper[4409]: I1203 14:26:47.216725 4409 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerID="187bf2aad3aadf8a1359a568ae500b2b4fe576e5e180520ce08d5583947e5c18" exitCode=0 Dec 03 14:26:47.216836 master-0 kubenswrapper[4409]: I1203 14:26:47.216786 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"187bf2aad3aadf8a1359a568ae500b2b4fe576e5e180520ce08d5583947e5c18"} Dec 03 14:26:47.219646 master-0 kubenswrapper[4409]: I1203 14:26:47.219620 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-56f5898f45-fhnc5" event={"ID":"f1f2d0e1-eaaf-4037-a976-5fc2a942c50c","Type":"ContainerStarted","Data":"a9281942e948b091496d2f5c29c870c5e2b905da58e5b84d38ebf7242b811e12"} Dec 03 14:26:47.225349 master-0 kubenswrapper[4409]: I1203 14:26:47.225241 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"0d136381a4d40fe6f91a359301fa2d49f4b7c7d3484d43599a8bfc9a21e8faeb"} Dec 03 14:26:47.232555 master-0 kubenswrapper[4409]: I1203 14:26:47.232506 4409 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"ddbf4c82eecc8abab5bcb5ffad67e5c56d5fe657a13b8339d1ddb828e3bc2420"} Dec 03 14:26:47.234066 master-0 kubenswrapper[4409]: I1203 14:26:47.233995 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7479ffdf48-hpdzl" event={"ID":"0535e784-8e28-4090-aa2e-df937910767c","Type":"ContainerStarted","Data":"deed5ba6c86ee7c199c01b5b54b5330a40ed7971e0c38880fa47578a9fee0c0c"} Dec 03 14:26:47.239227 master-0 kubenswrapper[4409]: I1203 14:26:47.239161 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerStarted","Data":"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d"} Dec 03 14:26:47.241152 master-0 kubenswrapper[4409]: I1203 14:26:47.241125 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-q2lxz" event={"ID":"918ff36b-662f-46ae-b71a-301df7e67735","Type":"ContainerStarted","Data":"bde7dd6968729819996e64ae260723ea8f33c4badf577cbc1e846fc03fd9cda4"} Dec 03 14:26:47.243782 master-0 kubenswrapper[4409]: I1203 14:26:47.242937 4409 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="2adf9d3819f63c35ca84de4104c888028f05f133ef9f29f2409d289aea525981" exitCode=0 Dec 03 14:26:47.243782 master-0 kubenswrapper[4409]: I1203 14:26:47.242988 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"2adf9d3819f63c35ca84de4104c888028f05f133ef9f29f2409d289aea525981"} Dec 03 14:26:47.245903 master-0 
kubenswrapper[4409]: I1203 14:26:47.245840 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"5df62f14a94353685cb79bcd1bb8a7e1c022195ba61da4cf533e0062d5cf1ab7"} Dec 03 14:26:47.253836 master-0 kubenswrapper[4409]: I1203 14:26:47.253771 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" event={"ID":"b3eef3ef-f954-4e47-92b4-0155bc27332d","Type":"ContainerStarted","Data":"892010e31bca618d84daf610f2e4af4b70a49c5d1e170685890665be78195e63"} Dec 03 14:26:47.260160 master-0 kubenswrapper[4409]: I1203 14:26:47.254854 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:47.262415 master-0 kubenswrapper[4409]: I1203 14:26:47.262169 4409 patch_prober.go:28] interesting pod/olm-operator-76bd5d69c7-fjrrg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" start-of-body= Dec 03 14:26:47.262415 master-0 kubenswrapper[4409]: I1203 14:26:47.262240 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" podUID="b3eef3ef-f954-4e47-92b4-0155bc27332d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.59:8443/healthz\": dial tcp 10.128.0.59:8443: connect: connection refused" Dec 03 14:26:47.265907 master-0 kubenswrapper[4409]: I1203 14:26:47.265850 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" 
event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"d7dc778d38632791654b2a856cacf76adb9b1240475548eab978e9e110057ebd"} Dec 03 14:26:47.279365 master-0 kubenswrapper[4409]: I1203 14:26:47.278669 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerDied","Data":"f041fa08601c0b8638e7d01e08c73a48747db670c2858495291615311f2211b3"} Dec 03 14:26:47.280038 master-0 kubenswrapper[4409]: I1203 14:26:47.278514 4409 generic.go:334] "Generic (PLEG): container finished" podID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerID="f041fa08601c0b8638e7d01e08c73a48747db670c2858495291615311f2211b3" exitCode=0 Dec 03 14:26:47.285173 master-0 kubenswrapper[4409]: I1203 14:26:47.285142 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" event={"ID":"33a557d1-cdd9-47ff-afbd-a301e7f589a7","Type":"ContainerStarted","Data":"978045aee86cf16288e9852741b4df39f2e31cbc335576592eb2735849f0fd11"} Dec 03 14:26:47.285889 master-0 kubenswrapper[4409]: I1203 14:26:47.285723 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:47.318063 master-0 kubenswrapper[4409]: I1203 14:26:47.317997 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-n7qz8" event={"ID":"1c562495-1290-4792-b4b2-639faa594ae2","Type":"ContainerStarted","Data":"02983c3ec6f6ea28e229fc1472cf61f4f038d2007a8f01b725b97d56b36cb9df"} Dec 03 14:26:47.335328 master-0 kubenswrapper[4409]: I1203 14:26:47.335290 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-s5s96" 
event={"ID":"06d774e5-314a-49df-bdca-8e780c9af25a","Type":"ContainerStarted","Data":"1ebc1e084cfcfa641f61373781bee2fd2439133a6acc9e192c6059ee0a3df1f4"} Dec 03 14:26:47.357331 master-0 kubenswrapper[4409]: I1203 14:26:47.357262 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ch7xd" event={"ID":"b3c1ebb9-f052-410b-a999-45e9b75b0e58","Type":"ContainerStarted","Data":"b05ef22992d6e0196f18d2df3d5b35e0b6e656aacbef76439a76a275776b1edd"} Dec 03 14:26:47.376400 master-0 kubenswrapper[4409]: I1203 14:26:47.375656 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-qqwh7" event={"ID":"63ae92a3-0ff8-4650-8a7b-e26e4c86c8f4","Type":"ContainerStarted","Data":"0d3f91162cd5932d6e0e2f83d61aa60ba74306c98bb5b7b3bc0f05b5999c42c2"} Dec 03 14:26:47.391881 master-0 kubenswrapper[4409]: I1203 14:26:47.389998 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-565bdcb8-477pk" event={"ID":"aa169e84-880b-4e6d-aeee-7ebfa1f613d2","Type":"ContainerStarted","Data":"a5242c06fd234116ee49495903d378cc75d2be425020b3c1507fc4d3f51e87ec"} Dec 03 14:26:47.404243 master-0 kubenswrapper[4409]: I1203 14:26:47.403995 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74cff6cf84-bh8rz" Dec 03 14:26:47.412995 master-0 kubenswrapper[4409]: I1203 14:26:47.412955 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-x278n" event={"ID":"ab40dfa2-d8f8-4300-8a10-5aa73e1d6294","Type":"ContainerStarted","Data":"70741aecfe365ae86fbca8e3d871254137f0ef1ccbad0ec3b2db6d0c39f0ad1a"} Dec 03 14:26:47.415683 master-0 kubenswrapper[4409]: I1203 14:26:47.415355 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" 
event={"ID":"6be147fe-84e2-429b-9d53-91fd67fef7c4","Type":"ContainerStarted","Data":"3c8bb130f7f4440c0005df74de2e6848bedd186e4d57f11f59f1299881a614e0"} Dec 03 14:26:47.417834 master-0 kubenswrapper[4409]: I1203 14:26:47.417769 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"f834356bfe14fa623fe3c997b8a15a02ee3d90fd662c01f38fef8b4671e4eb78"} Dec 03 14:26:47.421217 master-0 kubenswrapper[4409]: I1203 14:26:47.421144 4409 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="33213d0872ea1b62a12a7ce40a434fb120c58f73f23d772a654a7d8ab159fde3" exitCode=0 Dec 03 14:26:47.421310 master-0 kubenswrapper[4409]: I1203 14:26:47.421243 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"33213d0872ea1b62a12a7ce40a434fb120c58f73f23d772a654a7d8ab159fde3"} Dec 03 14:26:47.432165 master-0 kubenswrapper[4409]: I1203 14:26:47.430238 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:47.432165 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:47.432165 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:47.432165 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:47.432165 master-0 kubenswrapper[4409]: I1203 14:26:47.430302 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 
14:26:47.434868 master-0 kubenswrapper[4409]: I1203 14:26:47.434794 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"e9696c895c2632073c59c0a1658b983b9c6d1f567e30199ca1e44ade68e35a4f"} Dec 03 14:26:47.447531 master-0 kubenswrapper[4409]: I1203 14:26:47.447303 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"e5575b4759941704900676425b3df74f905514964b80b5e77eaae8627a96aada"} Dec 03 14:26:47.449829 master-0 kubenswrapper[4409]: I1203 14:26:47.449698 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" event={"ID":"e89bc996-818b-46b9-ad39-a12457acd4bb","Type":"ContainerStarted","Data":"e95ac9d5b45494ede52e67178c87d2a1079d56f8acc2525e00db1e797b7dc35d"} Dec 03 14:26:47.450911 master-0 kubenswrapper[4409]: I1203 14:26:47.450886 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:47.454871 master-0 kubenswrapper[4409]: I1203 14:26:47.454821 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5m4f8" event={"ID":"4669137a-fbc4-41e1-8eeb-5f06b9da2641","Type":"ContainerStarted","Data":"81615894004e7b09e4baf65a3d0d1e261176553f63ec5bab45a408e3bfc2ed99"} Dec 03 14:26:47.455078 master-0 kubenswrapper[4409]: I1203 14:26:47.455043 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:47.462692 master-0 kubenswrapper[4409]: I1203 14:26:47.462653 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d7ddcf759-pvkrm" Dec 03 14:26:48.432485 
master-0 kubenswrapper[4409]: I1203 14:26:48.423830 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:48.432485 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:48.432485 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:48.432485 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:48.432485 master-0 kubenswrapper[4409]: I1203 14:26:48.423870 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:48.502829 master-0 kubenswrapper[4409]: I1203 14:26:48.502782 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-764cbf5554-kftwv" event={"ID":"829d285f-d532-45e4-b1ec-54adbc21b9f9","Type":"ContainerStarted","Data":"101faaa3289f43327003bb5073c4fe305925ffedecc58971187aed965d581455"} Dec 03 14:26:48.507896 master-0 kubenswrapper[4409]: I1203 14:26:48.505035 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"1119e219ae9f610db1c061db4ec10e010efc27169d0956a03f637b32cc374afe"} Dec 03 14:26:48.507896 master-0 kubenswrapper[4409]: I1203 14:26:48.507323 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"37074525a2e46b8141ad722e69bfd5577bbeadab06db4feb6ca5a63abd7920e9"} Dec 03 14:26:48.510206 master-0 kubenswrapper[4409]: I1203 14:26:48.509069 4409 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" event={"ID":"b3068c10-8dd5-4bdb-b305-db2c9c9ef3ab","Type":"ContainerStarted","Data":"d400c59b2efb576641a9c20ff124d29bdf089241b3e168578d4b9045a78da0a2"} Dec 03 14:26:48.510206 master-0 kubenswrapper[4409]: I1203 14:26:48.509699 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:48.515869 master-0 kubenswrapper[4409]: I1203 14:26:48.510914 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"851bdb89d354cede6a8cce7464d06492b6051cc900bddaf590069885ce648d6c"} Dec 03 14:26:48.518757 master-0 kubenswrapper[4409]: I1203 14:26:48.518718 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"3cf9fe58f2e043e5aaa5cf32e73cb28c2ec6822bc951a7e98829113b0d69e5c3"} Dec 03 14:26:48.520272 master-0 kubenswrapper[4409]: I1203 14:26:48.520246 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"1c9d41a83ba5e8e991e84c81ca3b3d03eb33025d6d63cb665e375d519fa1620b"} Dec 03 14:26:48.522743 master-0 kubenswrapper[4409]: I1203 14:26:48.522708 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-747bdb58b5-mn76f" Dec 03 14:26:48.524694 master-0 kubenswrapper[4409]: I1203 14:26:48.524660 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" 
event={"ID":"a5b3c1fb-6f81-4067-98da-681d6c7c33e4","Type":"ContainerStarted","Data":"6b4a7065587ceb387f767c45427bac6e5ec05beb35f10869992cfce43506738f"} Dec 03 14:26:48.525118 master-0 kubenswrapper[4409]: I1203 14:26:48.525089 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:48.527497 master-0 kubenswrapper[4409]: I1203 14:26:48.527458 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-f84784664-ntb9w" event={"ID":"6b95a5a6-db93-4a58-aaff-3619d130c8cb","Type":"ContainerStarted","Data":"be804cd8f0a0217abb7ce615b08e6f17f4dbd2a5aeeb54ddb0b2b2ba7364e9a7"} Dec 03 14:26:48.541228 master-0 kubenswrapper[4409]: I1203 14:26:48.540966 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-zgm6l" Dec 03 14:26:48.542831 master-0 kubenswrapper[4409]: I1203 14:26:48.542082 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" event={"ID":"a8dc6511-7339-4269-9d43-14ce53bb4e7f","Type":"ContainerStarted","Data":"ef0e90f049627abd1b60d4ce0eccf4a4305f685e1c9ed066ae4de7f50a452d6b"} Dec 03 14:26:48.542991 master-0 kubenswrapper[4409]: I1203 14:26:48.542862 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:48.557637 master-0 kubenswrapper[4409]: I1203 14:26:48.557540 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"c1635649d21ced95e4ab80d8bed1d9f27a2a9a245134c648f8ba9268b7049f9f"} Dec 03 14:26:48.579046 master-0 kubenswrapper[4409]: I1203 14:26:48.577815 4409 generic.go:334] "Generic (PLEG): container finished" 
podID="0b4c4f1f-d61e-483e-8c0b-6e2774437e4d" containerID="5df62f14a94353685cb79bcd1bb8a7e1c022195ba61da4cf533e0062d5cf1ab7" exitCode=0 Dec 03 14:26:48.579046 master-0 kubenswrapper[4409]: I1203 14:26:48.577973 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerDied","Data":"5df62f14a94353685cb79bcd1bb8a7e1c022195ba61da4cf533e0062d5cf1ab7"} Dec 03 14:26:48.601874 master-0 kubenswrapper[4409]: I1203 14:26:48.597637 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"7e8469172eba002a2c4263d9bb2e3bcf32226879af4f402442b20a104f5e4346"} Dec 03 14:26:48.601874 master-0 kubenswrapper[4409]: I1203 14:26:48.597727 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-7f88444875-6dk29" event={"ID":"7663a25e-236d-4b1d-83ce-733ab146dee3","Type":"ContainerStarted","Data":"09d158bb3abfe423f7c6ce0669ebf6e2f2280fd86058c3cea4809adde12f0d68"} Dec 03 14:26:48.604767 master-0 kubenswrapper[4409]: I1203 14:26:48.603443 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"a093b5ed43f45e60171065ae88f363ec0e195bc6bc59666905d5b186cdf5dacb"} Dec 03 14:26:48.613619 master-0 kubenswrapper[4409]: I1203 14:26:48.613416 4409 generic.go:334] "Generic (PLEG): container finished" podID="24dfafc9-86a9-450e-ac62-a871138106c0" containerID="0a03339f0c2b570cbc0b2164f2fdd57d14e8544a4a7f450932e1abaf40e6f911" exitCode=0 Dec 03 14:26:48.613619 master-0 kubenswrapper[4409]: I1203 14:26:48.613555 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerDied","Data":"0a03339f0c2b570cbc0b2164f2fdd57d14e8544a4a7f450932e1abaf40e6f911"} Dec 03 14:26:48.621307 master-0 kubenswrapper[4409]: I1203 14:26:48.620391 4409 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="292d888acff68795cf93261f44dda434022341bac29014b9b4b596d0ef7303b9" exitCode=0 Dec 03 14:26:48.621307 master-0 kubenswrapper[4409]: I1203 14:26:48.620451 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"292d888acff68795cf93261f44dda434022341bac29014b9b4b596d0ef7303b9"} Dec 03 14:26:48.623315 master-0 kubenswrapper[4409]: I1203 14:26:48.623187 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-7978bf889c-n64v4" event={"ID":"52100521-67e9-40c9-887c-eda6560f06e0","Type":"ContainerStarted","Data":"df1bf7267a8f69924ddb670221d416d9994662e850eddf50bfc2d52bc6956c01"} Dec 03 14:26:48.637276 master-0 kubenswrapper[4409]: I1203 14:26:48.637177 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-6f5db8559b-96ljh" event={"ID":"6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d","Type":"ContainerStarted","Data":"d5988f208523011c5006fff30d19c8d0c665485a9d3fced641ceec1190f7ce9b"} Dec 03 14:26:48.637831 master-0 kubenswrapper[4409]: I1203 14:26:48.637802 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:26:48.648310 master-0 kubenswrapper[4409]: I1203 14:26:48.648261 4409 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection 
refused" start-of-body= Dec 03 14:26:48.648464 master-0 kubenswrapper[4409]: I1203 14:26:48.648374 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:26:48.663930 master-0 kubenswrapper[4409]: I1203 14:26:48.661648 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-fjrrg" Dec 03 14:26:48.920069 master-0 kubenswrapper[4409]: E1203 14:26:48.919942 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee\": container with ID starting with 329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee not found: ID does not exist" containerID="329f86c396d464bc38c418b87773619b2eef8fc054593123b01a5e519b0845ee" Dec 03 14:26:49.096587 master-0 kubenswrapper[4409]: I1203 14:26:49.096549 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-77df56447c-vsrxx" Dec 03 14:26:49.147321 master-0 kubenswrapper[4409]: E1203 14:26:49.143243 4409 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803897bb_580e_4f7a_9be2_583fc607d1f6.slice/crio-conmon-b382bc2f7cf0e0b31e5a8139decb261306559703a063ef8daf8fb7eec43b6a66.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803897bb_580e_4f7a_9be2_583fc607d1f6.slice/crio-b382bc2f7cf0e0b31e5a8139decb261306559703a063ef8daf8fb7eec43b6a66.scope\": RecentStats: unable to find data in memory cache]" Dec 03 
14:26:49.160093 master-0 kubenswrapper[4409]: E1203 14:26:49.160050 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9\": container with ID starting with f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9 not found: ID does not exist" containerID="f75bc1541f46f7154b7953ef54cb3f09f85f84dfbc4389fdbade0aef1b5832e9" Dec 03 14:26:49.326226 master-0 kubenswrapper[4409]: E1203 14:26:49.326147 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288\": container with ID starting with 797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288 not found: ID does not exist" containerID="797a39e1adb91318ca3fd3b85ba235c902b5047e70e1f5814af1b42f34206288" Dec 03 14:26:49.421222 master-0 kubenswrapper[4409]: I1203 14:26:49.421186 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:49.421222 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:49.421222 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:49.421222 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:49.421468 master-0 kubenswrapper[4409]: I1203 14:26:49.421441 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:49.707282 master-0 kubenswrapper[4409]: I1203 14:26:49.695435 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-7486ff55f-wcnxg" event={"ID":"e9f484c1-1564-49c7-a43d-bd8b971cea20","Type":"ContainerStarted","Data":"08f9f7cc14222d014a646229797a3f330053c335658673195f1098c121d6c7d1"} Dec 03 14:26:49.707282 master-0 kubenswrapper[4409]: I1203 14:26:49.698961 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" event={"ID":"82bd0ae5-b35d-47c8-b693-b27a9a56476d","Type":"ContainerStarted","Data":"be69133eb1caa26e59dd5efadb8dec6178c9def3455c8d10260640600fa0904c"} Dec 03 14:26:49.707282 master-0 kubenswrapper[4409]: I1203 14:26:49.699463 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:26:49.712039 master-0 kubenswrapper[4409]: I1203 14:26:49.709335 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"fc1731832263408f7cfe02a162ec818c74fa2a669fbd29a0825a3790b0906b3a"} Dec 03 14:26:49.712039 master-0 kubenswrapper[4409]: I1203 14:26:49.709385 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"e48cb5bfda8cacd416fb4fba1d0c6f7f9723839e3520b700fde9b288513ce468"} Dec 03 14:26:49.712039 master-0 kubenswrapper[4409]: I1203 14:26:49.711093 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"625aa1dac13b2dfa4b012b723cde9666aa4120b33b95a0b9bc6bd1c085dd246d"} Dec 03 14:26:49.723172 master-0 kubenswrapper[4409]: I1203 14:26:49.718408 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" event={"ID":"faa79e15-1875-4865-b5e0-aecd4c447bad","Type":"ContainerStarted","Data":"09ee278f7ab0bf16b48895f1e870a00ca645dcc2f7cbe38647d682d3c65c387a"} Dec 03 14:26:49.723172 master-0 kubenswrapper[4409]: I1203 14:26:49.719255 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:26:49.723172 master-0 kubenswrapper[4409]: I1203 14:26:49.721455 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"909e03b13dd7707120d6a1d52a6b63878a257ff1b0c1619f73dd1b41c0bbf224"} Dec 03 14:26:49.723172 master-0 kubenswrapper[4409]: I1203 14:26:49.721484 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"62bdff6b8f01624d6aafd68e25d802816c2ff36b81c1cd471893b9735a9ed7b9"} Dec 03 14:26:49.755011 master-0 kubenswrapper[4409]: I1203 14:26:49.754670 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"a0c67d79a5c34eb576bdaab1d2b5568f216800a7f45316386fa4c28b39f64fd6"} Dec 03 14:26:49.755011 master-0 kubenswrapper[4409]: I1203 14:26:49.754713 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-jjd7h" event={"ID":"b340553b-d483-4839-8328-518f27770832","Type":"ContainerStarted","Data":"8d0565be2ca0ffe54ca101a2db3adf4d88ed309d50c2180afc1be90f97c6b272"} Dec 03 14:26:49.772424 master-0 kubenswrapper[4409]: I1203 14:26:49.772225 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"8880a8db47d1cce9f6997bbfa93761278ec20de928a71ca80d12d37383e4f334"} Dec 03 14:26:49.778290 master-0 kubenswrapper[4409]: I1203 14:26:49.778236 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"e01fa9dafba1d2fb384ecaf48277a89a408a6e81bf72be4d0ce120207730354d"} Dec 03 14:26:49.871146 master-0 kubenswrapper[4409]: I1203 14:26:49.870686 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"39ef1e6f3b82cc537e39a8462e927b47a9eadb7c2186396ea1cf37de5ec33410"} Dec 03 14:26:49.871146 master-0 kubenswrapper[4409]: I1203 14:26:49.870729 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" event={"ID":"a969ddd4-e20d-4dd2-84f4-a140bac65df0","Type":"ContainerStarted","Data":"a3d91b716e911550349f4c48f873bc2ac7023aed0d91508aca0b6a05199a4145"} Dec 03 14:26:49.871837 master-0 kubenswrapper[4409]: I1203 14:26:49.871669 4409 generic.go:334] "Generic (PLEG): container finished" podID="803897bb-580e-4f7a-9be2-583fc607d1f6" containerID="b382bc2f7cf0e0b31e5a8139decb261306559703a063ef8daf8fb7eec43b6a66" exitCode=0 Dec 03 14:26:49.871837 master-0 kubenswrapper[4409]: I1203 14:26:49.871782 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerDied","Data":"b382bc2f7cf0e0b31e5a8139decb261306559703a063ef8daf8fb7eec43b6a66"} Dec 03 14:26:49.895244 master-0 kubenswrapper[4409]: I1203 14:26:49.893832 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-85dbd94574-8jfp5" event={"ID":"bcc78129-4a81-410e-9a42-b12043b5a75a","Type":"ContainerStarted","Data":"eaa866194f6fe739d74d51b8f4f23c099fd44b77df8ab42d71fb8512182c1c6e"} Dec 03 14:26:49.918629 master-0 kubenswrapper[4409]: I1203 14:26:49.917489 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" event={"ID":"24dfafc9-86a9-450e-ac62-a871138106c0","Type":"ContainerStarted","Data":"554dae82351587dd95a0ef74efad4e32e1bf6d9f5757d85d573dfc2703416d5b"} Dec 03 14:26:49.928529 master-0 kubenswrapper[4409]: I1203 14:26:49.928288 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"27962147d55efb1a6f64f6ef2d89a3a3209908f0d507e831cb6e5bf8608990ac"} Dec 03 14:26:49.935946 master-0 kubenswrapper[4409]: I1203 14:26:49.928339 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"cacc0c0d9a22abb68cbf1b621c4894b0af8c3a6a6301d17ff17adf3af8a63348"} Dec 03 14:26:49.962199 master-0 kubenswrapper[4409]: I1203 14:26:49.962081 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7dcc7f9bd6-68wml" event={"ID":"8eee1d96-2f58-41a6-ae51-c158b29fc813","Type":"ContainerStarted","Data":"f8bf3e3c64935360987689afb1bb64ea438d61685197030a44950b2eed7f3012"} Dec 03 14:26:50.013591 master-0 kubenswrapper[4409]: I1203 14:26:50.013448 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-6b7bcd6566-jh9m8" event={"ID":"98392f8e-0285-4bc3-95a9-d29033639ca3","Type":"ContainerStarted","Data":"f990c52663d672814ad7efc96ae50b3311cd2497b478941d5046a9df02313be5"} Dec 03 14:26:50.046438 master-0 kubenswrapper[4409]: I1203 14:26:50.046392 4409 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"1935ae37287de2a453a86de3cfe8b0047858d3a8d6fa892d10aea098ab9cde37"} Dec 03 14:26:50.051686 master-0 kubenswrapper[4409]: I1203 14:26:50.051619 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" event={"ID":"0b4c4f1f-d61e-483e-8c0b-6e2774437e4d","Type":"ContainerStarted","Data":"b2e2237c146c2c229d5964690c3ce7d5e00031db1e95ed3d7c68237469315396"} Dec 03 14:26:50.057036 master-0 kubenswrapper[4409]: I1203 14:26:50.052440 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:50.067508 master-0 kubenswrapper[4409]: I1203 14:26:50.065138 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-tjwg8" event={"ID":"eefee934-ac6b-44e3-a6be-1ae62362ab4f","Type":"ContainerStarted","Data":"f948287c8c9990fb1243297d1a6c13af021d01d0d643cda3c26e195f5edcfd7d"} Dec 03 14:26:50.084327 master-0 kubenswrapper[4409]: I1203 14:26:50.082950 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" event={"ID":"69b752ed-691c-4574-a01e-428d4bf85b75","Type":"ContainerStarted","Data":"991e06a8086b73661a4e12d8c4947b425f01300dce90c3c099cf9eeff1cbfada"} Dec 03 14:26:50.084431 master-0 kubenswrapper[4409]: I1203 14:26:50.084390 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:26:50.109052 master-0 kubenswrapper[4409]: I1203 14:26:50.102523 4409 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure 
output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:26:50.109052 master-0 kubenswrapper[4409]: I1203 14:26:50.102584 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:26:50.109052 master-0 kubenswrapper[4409]: I1203 14:26:50.103617 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-dvklg" event={"ID":"6f723d97-5c65-4ae7-9085-26db8b4f2f52","Type":"ContainerStarted","Data":"e369c50d97e64c0e5b16c22cee679d8e22640b6d7dee37d1b6405174a19caeac"} Dec 03 14:26:50.325468 master-0 kubenswrapper[4409]: I1203 14:26:50.325413 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager/2.log" Dec 03 14:26:50.426520 master-0 kubenswrapper[4409]: I1203 14:26:50.426463 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:50.426520 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:50.426520 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:50.426520 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:50.426977 master-0 kubenswrapper[4409]: I1203 14:26:50.426944 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Dec 03 14:26:50.598048 master-0 kubenswrapper[4409]: I1203 14:26:50.597922 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/cluster-policy-controller/3.log" Dec 03 14:26:50.732353 master-0 kubenswrapper[4409]: I1203 14:26:50.728419 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/1.log" Dec 03 14:26:50.915664 master-0 kubenswrapper[4409]: I1203 14:26:50.915559 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-recovery-controller/1.log" Dec 03 14:26:50.918115 master-0 kubenswrapper[4409]: I1203 14:26:50.918073 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:50.935795 master-0 kubenswrapper[4409]: I1203 14:26:50.935751 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-547cc9cc49-kqs4k" Dec 03 14:26:51.112166 master-0 kubenswrapper[4409]: I1203 14:26:51.112108 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" event={"ID":"8a12409a-0be3-4023-9df3-a0f091aac8dc","Type":"ContainerStarted","Data":"4b0e8157825a605470ee69e6a918997b6033297c78c0d91cf77a73b6b0f62fe8"} Dec 03 14:26:51.112602 master-0 kubenswrapper[4409]: I1203 14:26:51.112559 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:51.115673 master-0 kubenswrapper[4409]: I1203 14:26:51.115613 4409 generic.go:334] "Generic (PLEG): container finished" podID="911f6333-cdb0-425c-b79b-f892444b7097" 
containerID="8880a8db47d1cce9f6997bbfa93761278ec20de928a71ca80d12d37383e4f334" exitCode=0 Dec 03 14:26:51.115781 master-0 kubenswrapper[4409]: I1203 14:26:51.115731 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerDied","Data":"8880a8db47d1cce9f6997bbfa93761278ec20de928a71ca80d12d37383e4f334"} Dec 03 14:26:51.121813 master-0 kubenswrapper[4409]: I1203 14:26:51.121682 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-5h2kn" event={"ID":"803897bb-580e-4f7a-9be2-583fc607d1f6","Type":"ContainerStarted","Data":"e9eca442dce310e36eff5268c47d7ed18d632e61bc945b87acc4c03bfbec5836"} Dec 03 14:26:51.126329 master-0 kubenswrapper[4409]: I1203 14:26:51.126269 4409 generic.go:334] "Generic (PLEG): container finished" podID="614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1" containerID="e69b9e640f0bbcee040fb8d96758fa544fac4dff5a6cf46df190e16512f4b782" exitCode=0 Dec 03 14:26:51.126603 master-0 kubenswrapper[4409]: I1203 14:26:51.126385 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerDied","Data":"e69b9e640f0bbcee040fb8d96758fa544fac4dff5a6cf46df190e16512f4b782"} Dec 03 14:26:51.127473 master-0 kubenswrapper[4409]: I1203 14:26:51.127418 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-cc996c4bd-j4hzr" Dec 03 14:26:51.128912 master-0 kubenswrapper[4409]: I1203 14:26:51.128874 4409 generic.go:334] "Generic (PLEG): container finished" podID="bff18a80-0b0f-40ab-862e-e8b1ab32040a" containerID="1935ae37287de2a453a86de3cfe8b0047858d3a8d6fa892d10aea098ab9cde37" exitCode=0 Dec 03 14:26:51.128999 master-0 kubenswrapper[4409]: I1203 14:26:51.128947 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerDied","Data":"1935ae37287de2a453a86de3cfe8b0047858d3a8d6fa892d10aea098ab9cde37"} Dec 03 14:26:51.157121 master-0 kubenswrapper[4409]: I1203 14:26:51.157056 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"482cd970e9e26e53726fa3a21e30070c3de1b0634fd2016af32a30d0e3e99763"} Dec 03 14:26:51.157121 master-0 kubenswrapper[4409]: I1203 14:26:51.157106 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"77df6af756cfb2a630ccf1a609b6fcc8bb8f39a0ba969151dd7f6cb5b31caaa2"} Dec 03 14:26:51.157121 master-0 kubenswrapper[4409]: I1203 14:26:51.157118 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"5d838c1a-22e2-4096-9739-7841ef7d06ba","Type":"ContainerStarted","Data":"7f838931dd6ec372e35a8c9a2db8fbe56717e26623d037c4a4c5e33097e3cba7"} Dec 03 14:26:51.166472 master-0 kubenswrapper[4409]: I1203 14:26:51.161298 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-57cbc648f8-q4cgg" event={"ID":"74e39dce-29d5-4b2a-ab19-386b6cdae94d","Type":"ContainerStarted","Data":"9ff560fe25b82a4c8dcb2ed7e9e7373650923f95805e2d18369d55daa660fae2"} Dec 03 14:26:51.166472 master-0 kubenswrapper[4409]: I1203 14:26:51.163813 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-7d67745bb7-dwcxb" Dec 03 14:26:51.167166 master-0 kubenswrapper[4409]: I1203 14:26:51.167091 4409 generic.go:334] "Generic (PLEG): container finished" podID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" 
containerID="625aa1dac13b2dfa4b012b723cde9666aa4120b33b95a0b9bc6bd1c085dd246d" exitCode=0 Dec 03 14:26:51.167563 master-0 kubenswrapper[4409]: I1203 14:26:51.167174 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerDied","Data":"625aa1dac13b2dfa4b012b723cde9666aa4120b33b95a0b9bc6bd1c085dd246d"} Dec 03 14:26:51.167563 master-0 kubenswrapper[4409]: I1203 14:26:51.167214 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t8rt7" event={"ID":"a192c38a-4bfa-40fe-9a2d-d48260cf6443","Type":"ContainerStarted","Data":"0f4bb28872e7e81cbf8859e32a8afaef4cef342f01e3806e3cd6718e40425b1a"} Dec 03 14:26:51.175245 master-0 kubenswrapper[4409]: I1203 14:26:51.175190 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"b9088b5ad8c1c7a4609df21ed0c2ce8d617763be428b4bea5429a067277db1cb"} Dec 03 14:26:51.175328 master-0 kubenswrapper[4409]: I1203 14:26:51.175269 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"141a0b6c7fa0e142f216f4fda3d46790e1abc970166de9bf398e8b784f51b2a0"} Dec 03 14:26:51.175328 master-0 kubenswrapper[4409]: I1203 14:26:51.175290 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"56649bd4-ac30-4a70-8024-772294fede88","Type":"ContainerStarted","Data":"ed6f6a993dda9f1966f44ef9c5ce31d32fffe520a63e84c15d2d513a407beb7c"} Dec 03 14:26:51.177863 master-0 kubenswrapper[4409]: I1203 14:26:51.177774 4409 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:26:51.177974 master-0 kubenswrapper[4409]: I1203 14:26:51.177907 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:26:51.186891 master-0 kubenswrapper[4409]: I1203 14:26:51.186827 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-68c95b6cf5-fmdmz" Dec 03 14:26:51.206418 master-0 kubenswrapper[4409]: I1203 14:26:51.206370 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:51.206418 master-0 kubenswrapper[4409]: I1203 14:26:51.206423 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:26:51.221479 master-0 kubenswrapper[4409]: I1203 14:26:51.221421 4409 patch_prober.go:28] interesting pod/console-6c9c84854-xf7nv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" start-of-body= Dec 03 14:26:51.221731 master-0 kubenswrapper[4409]: I1203 14:26:51.221493 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" probeResult="failure" output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" Dec 03 14:26:51.316612 master-0 kubenswrapper[4409]: I1203 14:26:51.316570 4409 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-b5dddf8f5-kwb74_b051ae27-7879-448d-b426-4dce76e29739/kube-controller-manager-operator/3.log" Dec 03 14:26:51.361765 master-0 kubenswrapper[4409]: I1203 14:26:51.361708 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:51.362324 master-0 kubenswrapper[4409]: I1203 14:26:51.362281 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: I1203 14:26:51.369700 4409 patch_prober.go:28] interesting pod/apiserver-6985f84b49-v9vlg container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]log ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]etcd ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/generic-apiserver-start-informers ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/max-in-flight-filter ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/project.openshift.io-projectcache ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/openshift.io-startinformers ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 03 14:26:51.369756 master-0 kubenswrapper[4409]: livez check failed Dec 03 14:26:51.370627 master-0 kubenswrapper[4409]: I1203 14:26:51.369772 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" podUID="a969ddd4-e20d-4dd2-84f4-a140bac65df0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:51.392115 master-0 kubenswrapper[4409]: I1203 14:26:51.392044 4409 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:26:51.392343 master-0 kubenswrapper[4409]: I1203 14:26:51.392135 4409 patch_prober.go:28] interesting pod/downloads-6f5db8559b-96ljh container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Dec 03 14:26:51.392343 master-0 kubenswrapper[4409]: I1203 14:26:51.392129 4409 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:26:51.392343 master-0 kubenswrapper[4409]: I1203 14:26:51.392198 4409 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-6f5db8559b-96ljh" podUID="6dd61097-7ea1-4d1d-9d4d-a781a0a59e7d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Dec 03 14:26:51.422866 master-0 kubenswrapper[4409]: I1203 14:26:51.422173 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:51.422866 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:51.422866 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:51.422866 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:51.422866 master-0 kubenswrapper[4409]: I1203 14:26:51.422251 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:51.888300 master-0 kubenswrapper[4409]: I1203 14:26:51.888213 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:51.888300 master-0 kubenswrapper[4409]: I1203 14:26:51.888298 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:52.223515 master-0 kubenswrapper[4409]: I1203 14:26:52.223198 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:52.223515 master-0 kubenswrapper[4409]: I1203 14:26:52.223320 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:26:52.416649 master-0 
kubenswrapper[4409]: I1203 14:26:52.416529 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:52.416649 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:52.416649 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:52.416649 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:52.417105 master-0 kubenswrapper[4409]: I1203 14:26:52.416651 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:53.278808 master-0 kubenswrapper[4409]: I1203 14:26:53.278601 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t8rt7" podUID="a192c38a-4bfa-40fe-9a2d-d48260cf6443" containerName="registry-server" probeResult="failure" output=< Dec 03 14:26:53.278808 master-0 kubenswrapper[4409]: timeout: failed to connect service ":50051" within 1s Dec 03 14:26:53.278808 master-0 kubenswrapper[4409]: > Dec 03 14:26:53.418123 master-0 kubenswrapper[4409]: I1203 14:26:53.417991 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:53.418123 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:53.418123 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:53.418123 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:53.418615 master-0 kubenswrapper[4409]: I1203 14:26:53.418159 4409 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:54.417351 master-0 kubenswrapper[4409]: I1203 14:26:54.417252 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:54.417351 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:54.417351 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:54.417351 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:54.418335 master-0 kubenswrapper[4409]: I1203 14:26:54.417351 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:55.203753 master-0 kubenswrapper[4409]: I1203 14:26:55.203676 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6z4sc" event={"ID":"911f6333-cdb0-425c-b79b-f892444b7097","Type":"ContainerStarted","Data":"abac9b15e931cb6df15767cf2d44a0277e5f268ad732ba3537bbb5e707ab75ea"} Dec 03 14:26:55.375639 master-0 kubenswrapper[4409]: I1203 14:26:55.375543 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:55.383888 master-0 kubenswrapper[4409]: I1203 14:26:55.383830 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-57fd58bc7b-kktql" Dec 03 14:26:55.417176 master-0 kubenswrapper[4409]: I1203 14:26:55.417112 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:55.417176 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:55.417176 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:55.417176 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:55.417867 master-0 kubenswrapper[4409]: I1203 14:26:55.417187 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:55.596836 master-0 kubenswrapper[4409]: I1203 14:26:55.596759 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/setup/2.log" Dec 03 14:26:55.602942 master-0 kubenswrapper[4409]: I1203 14:26:55.602903 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b495b0c38f2c54e7cc46282c5f92aab5/kube-rbac-proxy-crio/7.log" Dec 03 14:26:55.667511 master-0 kubenswrapper[4409]: I1203 14:26:55.667444 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-74cddd4fb5-phk6r_8c6fa89f-268c-477b-9f04-238d2305cc89/machine-config-controller/3.log" Dec 03 14:26:55.677548 master-0 kubenswrapper[4409]: I1203 14:26:55.677503 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-74cddd4fb5-phk6r_8c6fa89f-268c-477b-9f04-238d2305cc89/kube-rbac-proxy/3.log" Dec 03 14:26:55.703442 master-0 kubenswrapper[4409]: I1203 14:26:55.703400 4409 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2ztl9_799e819f-f4b2-4ac9-8fa4-7d4da7a79285/machine-config-daemon/6.log" Dec 03 14:26:55.708834 master-0 kubenswrapper[4409]: I1203 14:26:55.708786 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-2ztl9_799e819f-f4b2-4ac9-8fa4-7d4da7a79285/kube-rbac-proxy/4.log" Dec 03 14:26:55.934988 master-0 kubenswrapper[4409]: I1203 14:26:55.934921 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5m4f8" Dec 03 14:26:56.013425 master-0 kubenswrapper[4409]: I1203 14:26:56.013369 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-664c9d94c9-9vfr4_4df2889c-99f7-402a-9d50-18ccf427179c/machine-config-operator/3.log" Dec 03 14:26:56.135468 master-0 kubenswrapper[4409]: I1203 14:26:56.135293 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:26:56.207172 master-0 kubenswrapper[4409]: I1203 14:26:56.207114 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-664c9d94c9-9vfr4_4df2889c-99f7-402a-9d50-18ccf427179c/kube-rbac-proxy/3.log" Dec 03 14:26:56.215661 master-0 kubenswrapper[4409]: I1203 14:26:56.215587 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7fwtv" event={"ID":"bff18a80-0b0f-40ab-862e-e8b1ab32040a","Type":"ContainerStarted","Data":"afc6fdcda207ed598d72b7425103535b9d56db6e5ffb2a44f3a952f06466a61a"} Dec 03 14:26:56.220899 master-0 kubenswrapper[4409]: I1203 14:26:56.220826 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ddwmn" 
event={"ID":"614a9f32-4ee8-4503-b3f1-e7ee78c6e6e1","Type":"ContainerStarted","Data":"6f47c423ec453424024908a98ef68fa052e7e0c67881687bfdcca03bcc0ee8f8"} Dec 03 14:26:56.369312 master-0 kubenswrapper[4409]: I1203 14:26:56.369242 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:56.377548 master-0 kubenswrapper[4409]: I1203 14:26:56.377489 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-6985f84b49-v9vlg" Dec 03 14:26:56.417471 master-0 kubenswrapper[4409]: I1203 14:26:56.417288 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:56.417471 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:56.417471 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:56.417471 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:56.418817 master-0 kubenswrapper[4409]: I1203 14:26:56.417459 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:56.608278 master-0 kubenswrapper[4409]: I1203 14:26:56.608226 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-pvrfs_eecc43f5-708f-4395-98cc-696b243d6321/machine-config-server/6.log" Dec 03 14:26:57.417706 master-0 kubenswrapper[4409]: I1203 14:26:57.417608 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:57.417706 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:57.417706 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:57.417706 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:57.417706 master-0 kubenswrapper[4409]: I1203 14:26:57.417703 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:58.417444 master-0 kubenswrapper[4409]: I1203 14:26:58.417319 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:58.417444 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:58.417444 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:26:58.417444 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:58.418142 master-0 kubenswrapper[4409]: I1203 14:26:58.417501 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:26:59.417371 master-0 kubenswrapper[4409]: I1203 14:26:59.417322 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:26:59.417371 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:26:59.417371 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 
14:26:59.417371 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:26:59.417676 master-0 kubenswrapper[4409]: I1203 14:26:59.417390 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:00.417858 master-0 kubenswrapper[4409]: I1203 14:27:00.417579 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:00.417858 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:00.417858 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:00.417858 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:00.418502 master-0 kubenswrapper[4409]: I1203 14:27:00.417900 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:00.829204 master-0 kubenswrapper[4409]: I1203 14:27:00.829144 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:27:00.829204 master-0 kubenswrapper[4409]: I1203 14:27:00.829203 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:27:00.975706 master-0 kubenswrapper[4409]: I1203 14:27:00.975622 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:27:01.015020 master-0 kubenswrapper[4409]: I1203 14:27:01.014951 4409 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:27:01.147229 master-0 kubenswrapper[4409]: I1203 14:27:01.147105 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-754cfd84-qf898" Dec 03 14:27:01.163622 master-0 kubenswrapper[4409]: I1203 14:27:01.163591 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 03 14:27:01.204524 master-0 kubenswrapper[4409]: I1203 14:27:01.204458 4409 patch_prober.go:28] interesting pod/console-6c9c84854-xf7nv container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" start-of-body= Dec 03 14:27:01.204706 master-0 kubenswrapper[4409]: I1203 14:27:01.204545 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" probeResult="failure" output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" Dec 03 14:27:01.368744 master-0 kubenswrapper[4409]: I1203 14:27:01.368367 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:27:01.368744 master-0 kubenswrapper[4409]: I1203 14:27:01.368408 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:27:01.412181 master-0 kubenswrapper[4409]: I1203 14:27:01.412032 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-6f5db8559b-96ljh" Dec 03 14:27:01.416833 master-0 kubenswrapper[4409]: I1203 14:27:01.416666 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:01.416833 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:01.416833 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:01.416833 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:01.416833 master-0 kubenswrapper[4409]: I1203 14:27:01.416771 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:01.421481 master-0 kubenswrapper[4409]: I1203 14:27:01.421443 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:27:01.835986 master-0 kubenswrapper[4409]: I1203 14:27:01.835934 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" Dec 03 14:27:01.857596 master-0 kubenswrapper[4409]: I1203 14:27:01.857537 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:27:01.857991 master-0 kubenswrapper[4409]: I1203 14:27:01.857972 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:27:01.903109 master-0 kubenswrapper[4409]: I1203 14:27:01.902944 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:27:02.215961 master-0 kubenswrapper[4409]: I1203 14:27:02.215766 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:27:02.215961 master-0 kubenswrapper[4409]: I1203 
14:27:02.215889 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:27:02.266567 master-0 kubenswrapper[4409]: I1203 14:27:02.266502 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:27:02.267759 master-0 kubenswrapper[4409]: I1203 14:27:02.267737 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:27:02.308495 master-0 kubenswrapper[4409]: I1203 14:27:02.308429 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t8rt7" Dec 03 14:27:02.318161 master-0 kubenswrapper[4409]: I1203 14:27:02.318082 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6z4sc" Dec 03 14:27:02.318656 master-0 kubenswrapper[4409]: I1203 14:27:02.318616 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7fwtv" Dec 03 14:27:02.324781 master-0 kubenswrapper[4409]: I1203 14:27:02.324699 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ddwmn" Dec 03 14:27:02.417098 master-0 kubenswrapper[4409]: I1203 14:27:02.417021 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:02.417098 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:02.417098 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:02.417098 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:02.417449 master-0 kubenswrapper[4409]: I1203 14:27:02.417129 4409 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:03.418036 master-0 kubenswrapper[4409]: I1203 14:27:03.417930 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:03.418036 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:03.418036 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:03.418036 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:03.419136 master-0 kubenswrapper[4409]: I1203 14:27:03.418072 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:04.415522 master-0 kubenswrapper[4409]: I1203 14:27:04.415459 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:04.415522 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:04.415522 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:04.415522 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:04.415875 master-0 kubenswrapper[4409]: I1203 14:27:04.415548 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 03 14:27:05.417383 master-0 kubenswrapper[4409]: I1203 14:27:05.417316 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:05.417383 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:05.417383 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:05.417383 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:05.417958 master-0 kubenswrapper[4409]: I1203 14:27:05.417396 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:06.415782 master-0 kubenswrapper[4409]: I1203 14:27:06.415698 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:06.415782 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:06.415782 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:06.415782 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:06.416250 master-0 kubenswrapper[4409]: I1203 14:27:06.415778 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:07.416725 master-0 kubenswrapper[4409]: I1203 14:27:07.416604 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:07.416725 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:07.416725 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:07.416725 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:07.417793 master-0 kubenswrapper[4409]: I1203 14:27:07.416743 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:08.415734 master-0 kubenswrapper[4409]: I1203 14:27:08.415674 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:08.415734 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:08.415734 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:08.415734 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:08.416150 master-0 kubenswrapper[4409]: I1203 14:27:08.415750 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:09.416035 master-0 kubenswrapper[4409]: I1203 14:27:09.415961 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:09.416035 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 
14:27:09.416035 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:09.416035 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:09.416883 master-0 kubenswrapper[4409]: I1203 14:27:09.416065 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:09.531355 master-0 kubenswrapper[4409]: I1203 14:27:09.531277 4409 trace.go:236] Trace[2100682986]: "Calculate volume metrics of cache for pod openshift-operator-controller/operator-controller-controller-manager-5f78c89466-bshxw" (03-Dec-2025 14:27:07.786) (total time: 1745ms): Dec 03 14:27:09.531355 master-0 kubenswrapper[4409]: Trace[2100682986]: [1.745046614s] [1.745046614s] END Dec 03 14:27:10.416383 master-0 kubenswrapper[4409]: I1203 14:27:10.416280 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:10.416383 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:10.416383 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:10.416383 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:10.417000 master-0 kubenswrapper[4409]: I1203 14:27:10.416407 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:11.204899 master-0 kubenswrapper[4409]: I1203 14:27:11.204786 4409 patch_prober.go:28] interesting pod/console-6c9c84854-xf7nv container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" start-of-body= Dec 03 14:27:11.204899 master-0 kubenswrapper[4409]: I1203 14:27:11.204878 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" probeResult="failure" output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" Dec 03 14:27:11.416218 master-0 kubenswrapper[4409]: I1203 14:27:11.416094 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:11.416218 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:11.416218 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:11.416218 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:11.417309 master-0 kubenswrapper[4409]: I1203 14:27:11.416221 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:12.415948 master-0 kubenswrapper[4409]: I1203 14:27:12.415883 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:12.415948 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:12.415948 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:12.415948 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:12.416303 master-0 kubenswrapper[4409]: I1203 
14:27:12.415976 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:13.417702 master-0 kubenswrapper[4409]: I1203 14:27:13.417262 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:13.417702 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:13.417702 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:13.417702 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:13.417702 master-0 kubenswrapper[4409]: I1203 14:27:13.417359 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:14.420254 master-0 kubenswrapper[4409]: I1203 14:27:14.420144 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:14.420254 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:14.420254 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:14.420254 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:14.420903 master-0 kubenswrapper[4409]: I1203 14:27:14.420281 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 03 14:27:15.416193 master-0 kubenswrapper[4409]: I1203 14:27:15.416044 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:15.416193 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:15.416193 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:15.416193 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:15.416193 master-0 kubenswrapper[4409]: I1203 14:27:15.416110 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:16.416438 master-0 kubenswrapper[4409]: I1203 14:27:16.416322 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:16.416438 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:16.416438 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:16.416438 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:16.417078 master-0 kubenswrapper[4409]: I1203 14:27:16.416469 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:17.417288 master-0 kubenswrapper[4409]: I1203 14:27:17.417147 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:17.417288 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:17.417288 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:17.417288 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:17.418612 master-0 kubenswrapper[4409]: I1203 14:27:17.417387 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:18.416991 master-0 kubenswrapper[4409]: I1203 14:27:18.416739 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:18.416991 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:18.416991 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:18.416991 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:18.416991 master-0 kubenswrapper[4409]: I1203 14:27:18.416821 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:19.416372 master-0 kubenswrapper[4409]: I1203 14:27:19.416290 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:19.416372 master-0 kubenswrapper[4409]: [-]has-synced 
failed: reason withheld Dec 03 14:27:19.416372 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:19.416372 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:19.416372 master-0 kubenswrapper[4409]: I1203 14:27:19.416375 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:20.416119 master-0 kubenswrapper[4409]: I1203 14:27:20.416060 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:20.416119 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:20.416119 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:20.416119 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:20.416700 master-0 kubenswrapper[4409]: I1203 14:27:20.416132 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:20.835595 master-0 kubenswrapper[4409]: I1203 14:27:20.835542 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:27:20.839116 master-0 kubenswrapper[4409]: I1203 14:27:20.839063 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-555496955b-vpcbs" Dec 03 14:27:21.204422 master-0 kubenswrapper[4409]: I1203 14:27:21.204245 4409 patch_prober.go:28] interesting pod/console-6c9c84854-xf7nv container/console namespace/openshift-console: Startup probe 
status=failure output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" start-of-body= Dec 03 14:27:21.204422 master-0 kubenswrapper[4409]: I1203 14:27:21.204351 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" probeResult="failure" output="Get \"https://10.128.0.62:8443/health\": dial tcp 10.128.0.62:8443: connect: connection refused" Dec 03 14:27:21.307678 master-0 kubenswrapper[4409]: I1203 14:27:21.307530 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-h599p" Dec 03 14:27:21.416024 master-0 kubenswrapper[4409]: I1203 14:27:21.415949 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:21.416024 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:21.416024 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:21.416024 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:21.416786 master-0 kubenswrapper[4409]: I1203 14:27:21.416034 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:21.992442 master-0 kubenswrapper[4409]: I1203 14:27:21.992370 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-pcchm" Dec 03 14:27:22.418002 master-0 kubenswrapper[4409]: I1203 14:27:22.417886 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:22.418002 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:22.418002 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:22.418002 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:22.418002 master-0 kubenswrapper[4409]: I1203 14:27:22.417992 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:23.418771 master-0 kubenswrapper[4409]: I1203 14:27:23.418640 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:23.418771 master-0 kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:23.418771 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:23.418771 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:23.419609 master-0 kubenswrapper[4409]: I1203 14:27:23.418822 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:24.418965 master-0 kubenswrapper[4409]: I1203 14:27:24.418832 4409 patch_prober.go:28] interesting pod/router-default-54f97f57-rr9px container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 03 14:27:24.418965 master-0 
kubenswrapper[4409]: [-]has-synced failed: reason withheld Dec 03 14:27:24.418965 master-0 kubenswrapper[4409]: [+]process-running ok Dec 03 14:27:24.418965 master-0 kubenswrapper[4409]: healthz check failed Dec 03 14:27:24.420290 master-0 kubenswrapper[4409]: I1203 14:27:24.418987 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-54f97f57-rr9px" podUID="5c00a797-4c60-43dd-bd04-16b2c6f1b6a8" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 03 14:27:25.424397 master-0 kubenswrapper[4409]: I1203 14:27:25.422316 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:27:25.431634 master-0 kubenswrapper[4409]: I1203 14:27:25.431562 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-54f97f57-rr9px" Dec 03 14:27:27.135059 master-0 kubenswrapper[4409]: I1203 14:27:27.134391 4409 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:27:27.135059 master-0 kubenswrapper[4409]: I1203 14:27:27.134490 4409 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:27:27.135059 master-0 kubenswrapper[4409]: I1203 14:27:27.134947 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler" containerID="cri-o://d0a827a444c38d75c515a416cb1a917a642fb70a7523efb53087345e0bb3e2ee" gracePeriod=30 Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: E1203 14:27:27.135146 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 
14:27:27.135197 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: E1203 14:27:27.135220 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-recovery-controller" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135227 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-recovery-controller" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: E1203 14:27:27.135236 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-cert-syncer" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135242 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-cert-syncer" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: E1203 14:27:27.135249 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="wait-for-host-port" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135255 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="wait-for-host-port" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135253 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-recovery-controller" containerID="cri-o://05a2610f6bca4defc9b7ede8255a1c063ebe53f7d07ab7227fcf2edbc056b331" gracePeriod=30 Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135329 4409 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-cert-syncer" containerID="cri-o://2d61d8802bbc570d04dd9977fb07dd6294b8212bfe0e7176af3f6ce67f85ee5a" gracePeriod=30 Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135412 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-cert-syncer" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135441 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler-recovery-controller" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135460 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="kube-scheduler" Dec 03 14:27:27.135841 master-0 kubenswrapper[4409]: I1203 14:27:27.135470 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd2fa610bb2a39c39fcdd00db03a511a" containerName="wait-for-host-port" Dec 03 14:27:27.184106 master-0 kubenswrapper[4409]: I1203 14:27:27.183396 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.184106 master-0 kubenswrapper[4409]: I1203 14:27:27.183457 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 
03 14:27:27.190066 master-0 kubenswrapper[4409]: I1203 14:27:27.186319 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="fd2fa610bb2a39c39fcdd00db03a511a" podUID="1c7a19409131b64a7c59b2108929f94e" Dec 03 14:27:27.285460 master-0 kubenswrapper[4409]: I1203 14:27:27.285413 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.285627 master-0 kubenswrapper[4409]: I1203 14:27:27.285609 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.285925 master-0 kubenswrapper[4409]: I1203 14:27:27.285904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.286154 master-0 kubenswrapper[4409]: I1203 14:27:27.286112 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1c7a19409131b64a7c59b2108929f94e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1c7a19409131b64a7c59b2108929f94e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.352941 master-0 kubenswrapper[4409]: I1203 
14:27:27.352890 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/3.log" Dec 03 14:27:27.353746 master-0 kubenswrapper[4409]: I1203 14:27:27.353712 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.357888 master-0 kubenswrapper[4409]: I1203 14:27:27.357849 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="fd2fa610bb2a39c39fcdd00db03a511a" podUID="1c7a19409131b64a7c59b2108929f94e" Dec 03 14:27:27.386947 master-0 kubenswrapper[4409]: I1203 14:27:27.386823 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") pod \"fd2fa610bb2a39c39fcdd00db03a511a\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " Dec 03 14:27:27.386947 master-0 kubenswrapper[4409]: I1203 14:27:27.386911 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") pod \"fd2fa610bb2a39c39fcdd00db03a511a\" (UID: \"fd2fa610bb2a39c39fcdd00db03a511a\") " Dec 03 14:27:27.387494 master-0 kubenswrapper[4409]: I1203 14:27:27.387417 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "fd2fa610bb2a39c39fcdd00db03a511a" (UID: "fd2fa610bb2a39c39fcdd00db03a511a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:27.387842 master-0 kubenswrapper[4409]: I1203 14:27:27.387358 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "fd2fa610bb2a39c39fcdd00db03a511a" (UID: "fd2fa610bb2a39c39fcdd00db03a511a"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:27.391705 master-0 kubenswrapper[4409]: I1203 14:27:27.389622 4409 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:27.464746 master-0 kubenswrapper[4409]: I1203 14:27:27.464665 4409 generic.go:334] "Generic (PLEG): container finished" podID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" containerID="13136c43d490bf4821772cae2a446c6c229e324bf44e67e338c736015813b8e8" exitCode=0 Dec 03 14:27:27.464746 master-0 kubenswrapper[4409]: I1203 14:27:27.464733 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"9c016f10-6cf2-4409-9365-05ae2e2adc5a","Type":"ContainerDied","Data":"13136c43d490bf4821772cae2a446c6c229e324bf44e67e338c736015813b8e8"} Dec 03 14:27:27.467642 master-0 kubenswrapper[4409]: I1203 14:27:27.467598 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_fd2fa610bb2a39c39fcdd00db03a511a/kube-scheduler-cert-syncer/3.log" Dec 03 14:27:27.468477 master-0 kubenswrapper[4409]: I1203 14:27:27.468414 4409 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="05a2610f6bca4defc9b7ede8255a1c063ebe53f7d07ab7227fcf2edbc056b331" exitCode=0 Dec 03 14:27:27.468477 master-0 kubenswrapper[4409]: I1203 14:27:27.468466 4409 generic.go:334] "Generic (PLEG): 
container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="2d61d8802bbc570d04dd9977fb07dd6294b8212bfe0e7176af3f6ce67f85ee5a" exitCode=2 Dec 03 14:27:27.468477 master-0 kubenswrapper[4409]: I1203 14:27:27.468474 4409 generic.go:334] "Generic (PLEG): container finished" podID="fd2fa610bb2a39c39fcdd00db03a511a" containerID="d0a827a444c38d75c515a416cb1a917a642fb70a7523efb53087345e0bb3e2ee" exitCode=0 Dec 03 14:27:27.468603 master-0 kubenswrapper[4409]: I1203 14:27:27.468523 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:27.468603 master-0 kubenswrapper[4409]: I1203 14:27:27.468541 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54" Dec 03 14:27:27.485998 master-0 kubenswrapper[4409]: I1203 14:27:27.485943 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="fd2fa610bb2a39c39fcdd00db03a511a" podUID="1c7a19409131b64a7c59b2108929f94e" Dec 03 14:27:27.493216 master-0 kubenswrapper[4409]: I1203 14:27:27.493156 4409 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/fd2fa610bb2a39c39fcdd00db03a511a-cert-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:27.500958 master-0 kubenswrapper[4409]: I1203 14:27:27.500861 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="fd2fa610bb2a39c39fcdd00db03a511a" podUID="1c7a19409131b64a7c59b2108929f94e" Dec 03 14:27:27.583924 master-0 kubenswrapper[4409]: E1203 14:27:27.583847 4409 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2fa610bb2a39c39fcdd00db03a511a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2fa610bb2a39c39fcdd00db03a511a.slice/crio-40c3dc6aea4d9d99d20c3bf9cc34f1a768d3a2906321ea72f53917d5fa356e54\": RecentStats: unable to find data in memory cache]" Dec 03 14:27:27.823772 master-0 kubenswrapper[4409]: I1203 14:27:27.823688 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd2fa610bb2a39c39fcdd00db03a511a" path="/var/lib/kubelet/pods/fd2fa610bb2a39c39fcdd00db03a511a/volumes" Dec 03 14:27:28.895822 master-0 kubenswrapper[4409]: I1203 14:27:28.895727 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:27:29.019232 master-0 kubenswrapper[4409]: I1203 14:27:29.019124 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") pod \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " Dec 03 14:27:29.019506 master-0 kubenswrapper[4409]: I1203 14:27:29.019293 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") pod \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " Dec 03 14:27:29.019506 master-0 kubenswrapper[4409]: I1203 14:27:29.019387 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock" (OuterVolumeSpecName: "var-lock") pod "9c016f10-6cf2-4409-9365-05ae2e2adc5a" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:29.019601 master-0 kubenswrapper[4409]: I1203 14:27:29.019571 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") pod \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\" (UID: \"9c016f10-6cf2-4409-9365-05ae2e2adc5a\") " Dec 03 14:27:29.019809 master-0 kubenswrapper[4409]: I1203 14:27:29.019768 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9c016f10-6cf2-4409-9365-05ae2e2adc5a" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:29.020068 master-0 kubenswrapper[4409]: I1203 14:27:29.020035 4409 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:29.020068 master-0 kubenswrapper[4409]: I1203 14:27:29.020057 4409 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:29.022675 master-0 kubenswrapper[4409]: I1203 14:27:29.022616 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9c016f10-6cf2-4409-9365-05ae2e2adc5a" (UID: "9c016f10-6cf2-4409-9365-05ae2e2adc5a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:27:29.121574 master-0 kubenswrapper[4409]: I1203 14:27:29.121334 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9c016f10-6cf2-4409-9365-05ae2e2adc5a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:29.484718 master-0 kubenswrapper[4409]: I1203 14:27:29.484587 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"9c016f10-6cf2-4409-9365-05ae2e2adc5a","Type":"ContainerDied","Data":"3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d"} Dec 03 14:27:29.484718 master-0 kubenswrapper[4409]: I1203 14:27:29.484641 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3349da23706544021c863a15a9c86ef148e2a53ef9e8a8774efd419a83a8796d" Dec 03 14:27:29.484718 master-0 kubenswrapper[4409]: I1203 14:27:29.484654 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Dec 03 14:27:31.211099 master-0 kubenswrapper[4409]: I1203 14:27:31.210729 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:27:31.215617 master-0 kubenswrapper[4409]: I1203 14:27:31.215570 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:27:32.341528 master-0 kubenswrapper[4409]: I1203 14:27:32.341460 4409 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:27:32.342339 master-0 kubenswrapper[4409]: E1203 14:27:32.341973 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" containerName="installer" Dec 03 14:27:32.342339 master-0 kubenswrapper[4409]: I1203 14:27:32.341991 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" containerName="installer" Dec 03 14:27:32.342339 master-0 kubenswrapper[4409]: I1203 14:27:32.342161 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c016f10-6cf2-4409-9365-05ae2e2adc5a" containerName="installer" Dec 03 14:27:32.342777 master-0 kubenswrapper[4409]: I1203 14:27:32.342751 4409 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:27:32.343087 master-0 kubenswrapper[4409]: I1203 14:27:32.343022 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.343202 master-0 kubenswrapper[4409]: I1203 14:27:32.343127 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" containerID="cri-o://18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c" gracePeriod=15 Dec 03 14:27:32.343280 master-0 kubenswrapper[4409]: I1203 14:27:32.343200 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214" gracePeriod=15 Dec 03 14:27:32.343323 master-0 kubenswrapper[4409]: I1203 14:27:32.343233 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-syncer" containerID="cri-o://4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7" gracePeriod=15 Dec 03 14:27:32.343496 master-0 kubenswrapper[4409]: I1203 14:27:32.343200 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde" gracePeriod=15 Dec 03 14:27:32.343613 master-0 kubenswrapper[4409]: I1203 14:27:32.343200 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-regeneration-controller" 
containerID="cri-o://3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438" gracePeriod=15 Dec 03 14:27:32.344183 master-0 kubenswrapper[4409]: I1203 14:27:32.344153 4409 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Dec 03 14:27:32.344646 master-0 kubenswrapper[4409]: E1203 14:27:32.344630 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" Dec 03 14:27:32.344687 master-0 kubenswrapper[4409]: I1203 14:27:32.344652 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" Dec 03 14:27:32.344687 master-0 kubenswrapper[4409]: E1203 14:27:32.344669 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-insecure-readyz" Dec 03 14:27:32.344687 master-0 kubenswrapper[4409]: I1203 14:27:32.344678 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-insecure-readyz" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: E1203 14:27:32.344699 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="setup" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: I1203 14:27:32.344710 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="setup" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: E1203 14:27:32.344725 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-check-endpoints" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: I1203 14:27:32.344748 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-check-endpoints" Dec 03 14:27:32.344817 
master-0 kubenswrapper[4409]: E1203 14:27:32.344779 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-syncer" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: I1203 14:27:32.344789 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-syncer" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: E1203 14:27:32.344799 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:27:32.344817 master-0 kubenswrapper[4409]: I1203 14:27:32.344808 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:27:32.345114 master-0 kubenswrapper[4409]: I1203 14:27:32.345026 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-syncer" Dec 03 14:27:32.345114 master-0 kubenswrapper[4409]: I1203 14:27:32.345062 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-cert-regeneration-controller" Dec 03 14:27:32.345114 master-0 kubenswrapper[4409]: I1203 14:27:32.345081 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-insecure-readyz" Dec 03 14:27:32.345114 master-0 kubenswrapper[4409]: I1203 14:27:32.345096 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="setup" Dec 03 14:27:32.345350 master-0 kubenswrapper[4409]: I1203 14:27:32.345111 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver-check-endpoints" Dec 
03 14:27:32.345350 master-0 kubenswrapper[4409]: I1203 14:27:32.345148 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00233b22d19df39b2e1c8ba133b3c2" containerName="kube-apiserver" Dec 03 14:27:32.384998 master-0 kubenswrapper[4409]: I1203 14:27:32.384950 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.385131 master-0 kubenswrapper[4409]: I1203 14:27:32.385095 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.385170 master-0 kubenswrapper[4409]: I1203 14:27:32.385150 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.385408 master-0 kubenswrapper[4409]: I1203 14:27:32.385327 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.385628 master-0 kubenswrapper[4409]: I1203 14:27:32.385573 4409 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.385727 master-0 kubenswrapper[4409]: I1203 14:27:32.385694 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.385884 master-0 kubenswrapper[4409]: I1203 14:27:32.385845 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.385952 master-0 kubenswrapper[4409]: I1203 14:27:32.385927 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.398674 master-0 kubenswrapper[4409]: I1203 14:27:32.398614 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.487528 4409 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.487673 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.487690 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.487786 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488070 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488089 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488119 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488110 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488167 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488134 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.488301 master-0 kubenswrapper[4409]: I1203 14:27:32.488305 4409 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488798 master-0 kubenswrapper[4409]: I1203 14:27:32.488394 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488798 master-0 kubenswrapper[4409]: I1203 14:27:32.488520 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.488798 master-0 kubenswrapper[4409]: I1203 14:27:32.488554 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.488798 master-0 kubenswrapper[4409]: I1203 14:27:32.488562 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 
14:27:32.488798 master-0 kubenswrapper[4409]: I1203 14:27:32.488683 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dcee84e50e891b3b96d641a4f9f6a202-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"dcee84e50e891b3b96d641a4f9f6a202\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:32.513520 master-0 kubenswrapper[4409]: I1203 14:27:32.512780 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_8a00233b22d19df39b2e1c8ba133b3c2/kube-apiserver-cert-syncer/2.log" Dec 03 14:27:32.513520 master-0 kubenswrapper[4409]: I1203 14:27:32.513511 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214" exitCode=0 Dec 03 14:27:32.513755 master-0 kubenswrapper[4409]: I1203 14:27:32.513531 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde" exitCode=0 Dec 03 14:27:32.513755 master-0 kubenswrapper[4409]: I1203 14:27:32.513565 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438" exitCode=0 Dec 03 14:27:32.513755 master-0 kubenswrapper[4409]: I1203 14:27:32.513575 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7" exitCode=2 Dec 03 14:27:32.695581 master-0 kubenswrapper[4409]: I1203 14:27:32.695444 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:27:32.732463 master-0 kubenswrapper[4409]: E1203 14:27:32.732287 4409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.187dbad4f09899a1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ff63f5c90356b311bfd02f62719f7c37,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:27:32.731468193 +0000 UTC m=+85.058530709,LastTimestamp:2025-12-03 14:27:32.731468193 +0000 UTC m=+85.058530709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:27:33.523029 master-0 kubenswrapper[4409]: I1203 14:27:33.522295 4409 generic.go:334] "Generic (PLEG): container finished" podID="6be147fe-84e2-429b-9d53-91fd67fef7c4" containerID="3c8bb130f7f4440c0005df74de2e6848bedd186e4d57f11f59f1299881a614e0" exitCode=0 Dec 03 14:27:33.523029 master-0 kubenswrapper[4409]: I1203 14:27:33.522377 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"6be147fe-84e2-429b-9d53-91fd67fef7c4","Type":"ContainerDied","Data":"3c8bb130f7f4440c0005df74de2e6848bedd186e4d57f11f59f1299881a614e0"} Dec 03 14:27:33.526298 master-0 kubenswrapper[4409]: I1203 
14:27:33.524683 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:33.526298 master-0 kubenswrapper[4409]: I1203 14:27:33.526125 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:33.526458 master-0 kubenswrapper[4409]: I1203 14:27:33.526386 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ff63f5c90356b311bfd02f62719f7c37","Type":"ContainerStarted","Data":"7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7"} Dec 03 14:27:33.526458 master-0 kubenswrapper[4409]: I1203 14:27:33.526438 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ff63f5c90356b311bfd02f62719f7c37","Type":"ContainerStarted","Data":"a76d0d210f8e68d1713cbe0c52dd5ef563cff16d5ca9674e5046bf8a12b7e7a2"} Dec 03 14:27:33.533033 master-0 kubenswrapper[4409]: I1203 14:27:33.530497 4409 status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:33.533033 master-0 kubenswrapper[4409]: I1203 14:27:33.532220 4409 
status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:33.537022 master-0 kubenswrapper[4409]: I1203 14:27:33.534233 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:33.537022 master-0 kubenswrapper[4409]: I1203 14:27:33.535843 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.747464 master-0 kubenswrapper[4409]: I1203 14:27:34.747383 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_8a00233b22d19df39b2e1c8ba133b3c2/kube-apiserver-cert-syncer/2.log" Dec 03 14:27:34.749070 master-0 kubenswrapper[4409]: I1203 14:27:34.749001 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:34.750221 master-0 kubenswrapper[4409]: I1203 14:27:34.750174 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.750750 master-0 kubenswrapper[4409]: I1203 14:27:34.750705 4409 status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.751291 master-0 kubenswrapper[4409]: I1203 14:27:34.751236 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.821680 master-0 kubenswrapper[4409]: I1203 14:27:34.821636 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") pod \"8a00233b22d19df39b2e1c8ba133b3c2\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " Dec 03 14:27:34.821680 master-0 kubenswrapper[4409]: I1203 14:27:34.821680 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") pod \"8a00233b22d19df39b2e1c8ba133b3c2\" 
(UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " Dec 03 14:27:34.821680 master-0 kubenswrapper[4409]: I1203 14:27:34.821702 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") pod \"8a00233b22d19df39b2e1c8ba133b3c2\" (UID: \"8a00233b22d19df39b2e1c8ba133b3c2\") " Dec 03 14:27:34.822158 master-0 kubenswrapper[4409]: I1203 14:27:34.821826 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8a00233b22d19df39b2e1c8ba133b3c2" (UID: "8a00233b22d19df39b2e1c8ba133b3c2"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:34.822158 master-0 kubenswrapper[4409]: I1203 14:27:34.821933 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8a00233b22d19df39b2e1c8ba133b3c2" (UID: "8a00233b22d19df39b2e1c8ba133b3c2"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:34.822158 master-0 kubenswrapper[4409]: I1203 14:27:34.821956 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "8a00233b22d19df39b2e1c8ba133b3c2" (UID: "8a00233b22d19df39b2e1c8ba133b3c2"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:34.822377 master-0 kubenswrapper[4409]: I1203 14:27:34.822346 4409 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:34.822377 master-0 kubenswrapper[4409]: I1203 14:27:34.822378 4409 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-audit-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:34.822486 master-0 kubenswrapper[4409]: I1203 14:27:34.822391 4409 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8a00233b22d19df39b2e1c8ba133b3c2-cert-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:34.905531 master-0 kubenswrapper[4409]: I1203 14:27:34.905454 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:27:34.906832 master-0 kubenswrapper[4409]: I1203 14:27:34.906677 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.907549 master-0 kubenswrapper[4409]: I1203 14:27:34.907488 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:34.908201 master-0 kubenswrapper[4409]: I1203 14:27:34.908136 4409 status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.025706 master-0 kubenswrapper[4409]: I1203 14:27:35.025579 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") pod \"6be147fe-84e2-429b-9d53-91fd67fef7c4\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " Dec 03 14:27:35.026061 master-0 kubenswrapper[4409]: I1203 14:27:35.025734 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"6be147fe-84e2-429b-9d53-91fd67fef7c4" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:35.026061 master-0 kubenswrapper[4409]: I1203 14:27:35.025774 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") pod \"6be147fe-84e2-429b-9d53-91fd67fef7c4\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " Dec 03 14:27:35.026061 master-0 kubenswrapper[4409]: I1203 14:27:35.025837 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") pod \"6be147fe-84e2-429b-9d53-91fd67fef7c4\" (UID: \"6be147fe-84e2-429b-9d53-91fd67fef7c4\") " Dec 03 14:27:35.026061 master-0 kubenswrapper[4409]: I1203 14:27:35.025979 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock" (OuterVolumeSpecName: "var-lock") pod "6be147fe-84e2-429b-9d53-91fd67fef7c4" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:27:35.026619 master-0 kubenswrapper[4409]: I1203 14:27:35.026197 4409 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:35.026619 master-0 kubenswrapper[4409]: I1203 14:27:35.026221 4409 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6be147fe-84e2-429b-9d53-91fd67fef7c4-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:35.030847 master-0 kubenswrapper[4409]: I1203 14:27:35.030788 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6be147fe-84e2-429b-9d53-91fd67fef7c4" (UID: "6be147fe-84e2-429b-9d53-91fd67fef7c4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:27:35.128636 master-0 kubenswrapper[4409]: I1203 14:27:35.128531 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6be147fe-84e2-429b-9d53-91fd67fef7c4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:27:35.513391 master-0 kubenswrapper[4409]: E1203 14:27:35.513304 4409 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.515213 master-0 kubenswrapper[4409]: E1203 14:27:35.515159 4409 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.515872 master-0 kubenswrapper[4409]: E1203 14:27:35.515806 4409 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.516390 master-0 kubenswrapper[4409]: E1203 14:27:35.516346 4409 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.516911 master-0 kubenswrapper[4409]: E1203 14:27:35.516866 4409 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.516961 master-0 kubenswrapper[4409]: I1203 
14:27:35.516913 4409 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 03 14:27:35.517368 master-0 kubenswrapper[4409]: E1203 14:27:35.517321 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Dec 03 14:27:35.545955 master-0 kubenswrapper[4409]: I1203 14:27:35.545909 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_8a00233b22d19df39b2e1c8ba133b3c2/kube-apiserver-cert-syncer/2.log" Dec 03 14:27:35.546941 master-0 kubenswrapper[4409]: I1203 14:27:35.546822 4409 generic.go:334] "Generic (PLEG): container finished" podID="8a00233b22d19df39b2e1c8ba133b3c2" containerID="18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c" exitCode=0 Dec 03 14:27:35.547094 master-0 kubenswrapper[4409]: I1203 14:27:35.546944 4409 scope.go:117] "RemoveContainer" containerID="ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214" Dec 03 14:27:35.547094 master-0 kubenswrapper[4409]: I1203 14:27:35.546947 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:35.549826 master-0 kubenswrapper[4409]: I1203 14:27:35.549762 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"6be147fe-84e2-429b-9d53-91fd67fef7c4","Type":"ContainerDied","Data":"a8b042abc190e543cc898e3304f92c31d4f8880188b013310dc27ff11b6f74e3"} Dec 03 14:27:35.549826 master-0 kubenswrapper[4409]: I1203 14:27:35.549811 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b042abc190e543cc898e3304f92c31d4f8880188b013310dc27ff11b6f74e3" Dec 03 14:27:35.549826 master-0 kubenswrapper[4409]: I1203 14:27:35.549789 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Dec 03 14:27:35.564191 master-0 kubenswrapper[4409]: I1203 14:27:35.564145 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.564709 master-0 kubenswrapper[4409]: I1203 14:27:35.564623 4409 status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.565247 master-0 kubenswrapper[4409]: I1203 14:27:35.565184 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.568768 master-0 kubenswrapper[4409]: I1203 14:27:35.568731 4409 scope.go:117] "RemoveContainer" containerID="94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde" Dec 03 14:27:35.569802 master-0 kubenswrapper[4409]: I1203 14:27:35.569742 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.570446 master-0 kubenswrapper[4409]: I1203 14:27:35.570388 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.571020 master-0 kubenswrapper[4409]: I1203 14:27:35.570942 4409 status_manager.go:851] "Failed to get status for pod" podUID="8a00233b22d19df39b2e1c8ba133b3c2" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:35.587475 master-0 kubenswrapper[4409]: I1203 14:27:35.587406 4409 scope.go:117] "RemoveContainer" containerID="3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438" Dec 03 14:27:35.606658 master-0 kubenswrapper[4409]: I1203 14:27:35.606559 4409 scope.go:117] "RemoveContainer" 
containerID="4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7" Dec 03 14:27:35.625408 master-0 kubenswrapper[4409]: I1203 14:27:35.625368 4409 scope.go:117] "RemoveContainer" containerID="18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c" Dec 03 14:27:35.646648 master-0 kubenswrapper[4409]: I1203 14:27:35.646528 4409 scope.go:117] "RemoveContainer" containerID="9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d" Dec 03 14:27:35.678739 master-0 kubenswrapper[4409]: I1203 14:27:35.678646 4409 scope.go:117] "RemoveContainer" containerID="ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214" Dec 03 14:27:35.679603 master-0 kubenswrapper[4409]: E1203 14:27:35.679545 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214\": container with ID starting with ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214 not found: ID does not exist" containerID="ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214" Dec 03 14:27:35.679681 master-0 kubenswrapper[4409]: I1203 14:27:35.679610 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214"} err="failed to get container status \"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214\": rpc error: code = NotFound desc = could not find container \"ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214\": container with ID starting with ba9455685a56f3f41149d0d3f563b63462bb591e40876752e725eaf7b876e214 not found: ID does not exist" Dec 03 14:27:35.679738 master-0 kubenswrapper[4409]: I1203 14:27:35.679686 4409 scope.go:117] "RemoveContainer" containerID="94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde" Dec 03 14:27:35.680633 master-0 kubenswrapper[4409]: E1203 14:27:35.680593 
4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde\": container with ID starting with 94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde not found: ID does not exist" containerID="94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde" Dec 03 14:27:35.680702 master-0 kubenswrapper[4409]: I1203 14:27:35.680629 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde"} err="failed to get container status \"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde\": rpc error: code = NotFound desc = could not find container \"94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde\": container with ID starting with 94df77daedda5cb8116ff8e57a175ab4193092b16652e8e2dd2551ec1cf8bbde not found: ID does not exist" Dec 03 14:27:35.680702 master-0 kubenswrapper[4409]: I1203 14:27:35.680656 4409 scope.go:117] "RemoveContainer" containerID="3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438" Dec 03 14:27:35.681351 master-0 kubenswrapper[4409]: E1203 14:27:35.681288 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438\": container with ID starting with 3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438 not found: ID does not exist" containerID="3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438" Dec 03 14:27:35.681431 master-0 kubenswrapper[4409]: I1203 14:27:35.681345 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438"} err="failed to get container status 
\"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438\": rpc error: code = NotFound desc = could not find container \"3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438\": container with ID starting with 3de5fae5ea0bc25ae52062524fd84bd855a178f2f32db4d5ad42c59e36da6438 not found: ID does not exist" Dec 03 14:27:35.681431 master-0 kubenswrapper[4409]: I1203 14:27:35.681386 4409 scope.go:117] "RemoveContainer" containerID="4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7" Dec 03 14:27:35.681928 master-0 kubenswrapper[4409]: E1203 14:27:35.681903 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7\": container with ID starting with 4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7 not found: ID does not exist" containerID="4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7" Dec 03 14:27:35.682024 master-0 kubenswrapper[4409]: I1203 14:27:35.681932 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7"} err="failed to get container status \"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7\": rpc error: code = NotFound desc = could not find container \"4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7\": container with ID starting with 4af9358effec401a66d4e7b0efb4727245c3ea497e6adbc701f583c46390a5b7 not found: ID does not exist" Dec 03 14:27:35.682024 master-0 kubenswrapper[4409]: I1203 14:27:35.681958 4409 scope.go:117] "RemoveContainer" containerID="18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c" Dec 03 14:27:35.682423 master-0 kubenswrapper[4409]: E1203 14:27:35.682391 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c\": container with ID starting with 18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c not found: ID does not exist" containerID="18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c" Dec 03 14:27:35.682492 master-0 kubenswrapper[4409]: I1203 14:27:35.682428 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c"} err="failed to get container status \"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c\": rpc error: code = NotFound desc = could not find container \"18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c\": container with ID starting with 18ed48ab0b410f20700a8e7635664a6fdea7a6e74fc5dc53600f1405dd94063c not found: ID does not exist" Dec 03 14:27:35.682492 master-0 kubenswrapper[4409]: I1203 14:27:35.682448 4409 scope.go:117] "RemoveContainer" containerID="9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d" Dec 03 14:27:35.682913 master-0 kubenswrapper[4409]: E1203 14:27:35.682888 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d\": container with ID starting with 9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d not found: ID does not exist" containerID="9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d" Dec 03 14:27:35.682913 master-0 kubenswrapper[4409]: I1203 14:27:35.682915 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d"} err="failed to get container status \"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d\": rpc error: code = NotFound desc = could not find container 
\"9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d\": container with ID starting with 9a6c3a9ba776e2c50e5a5cbb5ef6fa0682723635bde89e05d2517f1e727e857d not found: ID does not exist" Dec 03 14:27:35.719301 master-0 kubenswrapper[4409]: E1203 14:27:35.719227 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Dec 03 14:27:35.833916 master-0 kubenswrapper[4409]: I1203 14:27:35.832126 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a00233b22d19df39b2e1c8ba133b3c2" path="/var/lib/kubelet/pods/8a00233b22d19df39b2e1c8ba133b3c2/volumes" Dec 03 14:27:36.125387 master-0 kubenswrapper[4409]: E1203 14:27:36.125291 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Dec 03 14:27:36.928403 master-0 kubenswrapper[4409]: E1203 14:27:36.928288 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Dec 03 14:27:37.821645 master-0 kubenswrapper[4409]: I1203 14:27:37.821529 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:37.823154 master-0 
kubenswrapper[4409]: I1203 14:27:37.823042 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:38.531226 master-0 kubenswrapper[4409]: E1203 14:27:38.530943 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Dec 03 14:27:40.081293 master-0 kubenswrapper[4409]: E1203 14:27:40.080981 4409 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.187dbad4f09899a1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:ff63f5c90356b311bfd02f62719f7c37,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2025-12-03 14:27:32.731468193 +0000 UTC m=+85.058530709,LastTimestamp:2025-12-03 14:27:32.731468193 +0000 UTC m=+85.058530709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Dec 03 14:27:41.733376 master-0 
kubenswrapper[4409]: E1203 14:27:41.733257 4409 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Dec 03 14:27:42.814761 master-0 kubenswrapper[4409]: I1203 14:27:42.814686 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:42.816161 master-0 kubenswrapper[4409]: I1203 14:27:42.816052 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:42.816660 master-0 kubenswrapper[4409]: I1203 14:27:42.816619 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:42.830370 master-0 kubenswrapper[4409]: I1203 14:27:42.830082 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:42.830370 master-0 kubenswrapper[4409]: I1203 14:27:42.830131 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:42.831036 master-0 kubenswrapper[4409]: E1203 14:27:42.830973 4409 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:42.831490 master-0 kubenswrapper[4409]: I1203 14:27:42.831452 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:43.612845 master-0 kubenswrapper[4409]: I1203 14:27:43.612697 4409 generic.go:334] "Generic (PLEG): container finished" podID="1c7a19409131b64a7c59b2108929f94e" containerID="0e110ce1ac08bbf01020fd2e8a108c748e862aafbeafc981ba80226679beca3d" exitCode=0 Dec 03 14:27:43.612845 master-0 kubenswrapper[4409]: I1203 14:27:43.612755 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1c7a19409131b64a7c59b2108929f94e","Type":"ContainerDied","Data":"0e110ce1ac08bbf01020fd2e8a108c748e862aafbeafc981ba80226679beca3d"} Dec 03 14:27:43.612845 master-0 kubenswrapper[4409]: I1203 14:27:43.612787 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1c7a19409131b64a7c59b2108929f94e","Type":"ContainerStarted","Data":"189de25abbd4121b5733303261e7d0c72a147c32f3e230c401ef279e60dacd12"} Dec 03 14:27:43.613388 master-0 kubenswrapper[4409]: I1203 14:27:43.613325 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:43.613388 master-0 kubenswrapper[4409]: I1203 14:27:43.613383 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:43.614309 master-0 kubenswrapper[4409]: E1203 14:27:43.614246 4409 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:43.614444 master-0 kubenswrapper[4409]: I1203 14:27:43.614324 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:43.615192 master-0 kubenswrapper[4409]: I1203 14:27:43.615000 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:44.624785 master-0 kubenswrapper[4409]: I1203 14:27:44.624696 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1c7a19409131b64a7c59b2108929f94e","Type":"ContainerStarted","Data":"7ce0a4dd19c57bd56c4adeb228a8bf8e724f0a1fecd7eb3e361ff0396ef4196c"} Dec 03 14:27:44.624785 master-0 kubenswrapper[4409]: I1203 14:27:44.624792 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1c7a19409131b64a7c59b2108929f94e","Type":"ContainerStarted","Data":"166052b43852ddf5583cd27464a48563cc65abbde40fae2e7c4572ed6f28a838"} Dec 03 14:27:44.625434 master-0 kubenswrapper[4409]: I1203 14:27:44.624810 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1c7a19409131b64a7c59b2108929f94e","Type":"ContainerStarted","Data":"57ce3fc232f1c2a929c3d5f87d2fec30dfda38c85b8465c95f77e590f351bde2"} Dec 03 14:27:44.625434 master-0 kubenswrapper[4409]: I1203 14:27:44.625107 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:44.625434 master-0 kubenswrapper[4409]: I1203 14:27:44.625203 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:44.625434 master-0 kubenswrapper[4409]: I1203 14:27:44.625226 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:44.626564 master-0 kubenswrapper[4409]: E1203 14:27:44.626491 4409 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:44.626642 master-0 kubenswrapper[4409]: I1203 14:27:44.626505 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:44.627551 master-0 kubenswrapper[4409]: I1203 14:27:44.627484 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:45.632205 master-0 kubenswrapper[4409]: I1203 14:27:45.632113 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:45.632205 master-0 kubenswrapper[4409]: I1203 14:27:45.632164 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:27:45.633501 master-0 kubenswrapper[4409]: E1203 14:27:45.633400 4409 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:27:46.640529 master-0 kubenswrapper[4409]: I1203 14:27:46.640405 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager/2.log" Dec 03 14:27:46.640529 master-0 kubenswrapper[4409]: I1203 14:27:46.640464 4409 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636" exitCode=1 Dec 03 14:27:46.640529 master-0 kubenswrapper[4409]: I1203 14:27:46.640494 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerDied","Data":"0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636"} Dec 03 14:27:46.641200 master-0 kubenswrapper[4409]: I1203 
14:27:46.640931 4409 scope.go:117] "RemoveContainer" containerID="0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636" Dec 03 14:27:46.643076 master-0 kubenswrapper[4409]: I1203 14:27:46.642889 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.643546 master-0 kubenswrapper[4409]: I1203 14:27:46.643514 4409 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.644245 master-0 kubenswrapper[4409]: I1203 14:27:46.644211 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.815087 master-0 kubenswrapper[4409]: I1203 14:27:46.814993 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:46.816948 master-0 kubenswrapper[4409]: I1203 14:27:46.816892 4409 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.817797 master-0 kubenswrapper[4409]: I1203 14:27:46.817762 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.818399 master-0 kubenswrapper[4409]: I1203 14:27:46.818367 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:46.841187 master-0 kubenswrapper[4409]: I1203 14:27:46.841140 4409 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:27:46.845925 master-0 kubenswrapper[4409]: I1203 14:27:46.845849 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:46.845925 master-0 kubenswrapper[4409]: I1203 14:27:46.845904 4409 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:46.847161 master-0 kubenswrapper[4409]: E1203 14:27:46.847102 4409 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:46.847783 master-0 kubenswrapper[4409]: I1203 14:27:46.847736 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:46.877717 master-0 kubenswrapper[4409]: W1203 14:27:46.877661 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcee84e50e891b3b96d641a4f9f6a202.slice/crio-3c19d529e4b1ba6af898ac86e233b1896449378570daf029b20d6478af843b07 WatchSource:0}: Error finding container 3c19d529e4b1ba6af898ac86e233b1896449378570daf029b20d6478af843b07: Status 404 returned error can't find the container with id 3c19d529e4b1ba6af898ac86e233b1896449378570daf029b20d6478af843b07 Dec 03 14:27:47.650939 master-0 kubenswrapper[4409]: I1203 14:27:47.650889 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager/2.log" Dec 03 14:27:47.651580 master-0 kubenswrapper[4409]: I1203 14:27:47.650999 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"bf1dbec7c25a38180c3a6691040eb5a8","Type":"ContainerStarted","Data":"3d63c97e8181b8a5a98730093a5cc984581455dd5e6126329dda21ffe29cf740"} Dec 03 14:27:47.652548 master-0 kubenswrapper[4409]: I1203 14:27:47.652509 4409 status_manager.go:851] "Failed to get status for pod" 
podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.653192 master-0 kubenswrapper[4409]: I1203 14:27:47.653162 4409 generic.go:334] "Generic (PLEG): container finished" podID="dcee84e50e891b3b96d641a4f9f6a202" containerID="e463c29944fb971f88905896f2e8cfcccb9045d01be5d23cf2cc9038a8423e85" exitCode=0 Dec 03 14:27:47.653293 master-0 kubenswrapper[4409]: I1203 14:27:47.653202 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerDied","Data":"e463c29944fb971f88905896f2e8cfcccb9045d01be5d23cf2cc9038a8423e85"} Dec 03 14:27:47.653293 master-0 kubenswrapper[4409]: I1203 14:27:47.653226 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"3c19d529e4b1ba6af898ac86e233b1896449378570daf029b20d6478af843b07"} Dec 03 14:27:47.653293 master-0 kubenswrapper[4409]: I1203 14:27:47.653220 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.653513 master-0 kubenswrapper[4409]: I1203 14:27:47.653464 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:47.653577 master-0 kubenswrapper[4409]: I1203 14:27:47.653518 4409 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:47.655302 master-0 kubenswrapper[4409]: I1203 14:27:47.655233 4409 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.655561 master-0 kubenswrapper[4409]: E1203 14:27:47.655465 4409 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:47.656073 master-0 kubenswrapper[4409]: I1203 14:27:47.656034 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.656552 master-0 kubenswrapper[4409]: I1203 14:27:47.656508 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.657576 master-0 kubenswrapper[4409]: I1203 14:27:47.657086 4409 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.833247 master-0 kubenswrapper[4409]: I1203 14:27:47.833149 4409 status_manager.go:851] "Failed to get status for pod" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.834208 master-0 kubenswrapper[4409]: I1203 14:27:47.834126 4409 status_manager.go:851] "Failed to get status for pod" podUID="1c7a19409131b64a7c59b2108929f94e" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.835037 master-0 kubenswrapper[4409]: I1203 14:27:47.834971 4409 status_manager.go:851] "Failed to get status for pod" podUID="dcee84e50e891b3b96d641a4f9f6a202" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:47.835752 master-0 kubenswrapper[4409]: I1203 14:27:47.835681 4409 status_manager.go:851] "Failed to get status for pod" podUID="ff63f5c90356b311bfd02f62719f7c37" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 
14:27:47.836506 master-0 kubenswrapper[4409]: I1203 14:27:47.836411 4409 status_manager.go:851] "Failed to get status for pod" podUID="bf1dbec7c25a38180c3a6691040eb5a8" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Dec 03 14:27:48.447699 master-0 kubenswrapper[4409]: I1203 14:27:48.447580 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:27:48.453406 master-0 kubenswrapper[4409]: I1203 14:27:48.453364 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:27:48.666152 master-0 kubenswrapper[4409]: I1203 14:27:48.666046 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"c1b5f6885c7612a47f89ecf1b5eb852658dbb0394167d4a6e79935e70ec8fd4a"} Dec 03 14:27:48.666152 master-0 kubenswrapper[4409]: I1203 14:27:48.666142 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"d8c73f9c9117e2ed99284f7ab1fb4e93c432de4c33f0c44b76cc98c819675316"} Dec 03 14:27:48.666152 master-0 kubenswrapper[4409]: I1203 14:27:48.666154 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"660a7e8daecb8d48aa3ac0501849a1c55e3cdab2e786bfa047f476cdf31f48fc"} Dec 03 14:27:48.677718 master-0 kubenswrapper[4409]: I1203 14:27:48.666283 4409 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:27:49.677625 master-0 kubenswrapper[4409]: I1203 14:27:49.677447 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"46cc10d677e66238de8738cb9a021fc9ce8769bfa5baf7d8bd981e4dd2767b30"} Dec 03 14:27:49.677625 master-0 kubenswrapper[4409]: I1203 14:27:49.677574 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"dcee84e50e891b3b96d641a4f9f6a202","Type":"ContainerStarted","Data":"2d680d362977f571f20e32ea632cfa124a8efba71100f4088530d74d5e7dd98e"} Dec 03 14:27:49.679135 master-0 kubenswrapper[4409]: I1203 14:27:49.678452 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:49.679135 master-0 kubenswrapper[4409]: I1203 14:27:49.678523 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:51.848266 master-0 kubenswrapper[4409]: I1203 14:27:51.848160 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:51.848266 master-0 kubenswrapper[4409]: I1203 14:27:51.848248 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:51.853869 master-0 kubenswrapper[4409]: I1203 14:27:51.853810 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:55.152169 master-0 kubenswrapper[4409]: I1203 14:27:55.151170 4409 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:55.197033 master-0 kubenswrapper[4409]: I1203 14:27:55.196873 4409 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://660a7e8daecb8d48aa3ac0501849a1c55e3cdab2e786bfa047f476cdf31f48fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1b5f6885c7612a47f89ecf1b5eb852658dbb0394167d4a6e79935e70ec8fd4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c4
5cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c73f9c9117e2ed99284f7ab1fb4e93c432de4c33f0c44b76cc98c819675316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cc10d677e66238de8738cb9a021fc9ce8769bfa5baf7d8bd981e4dd2767b30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d680d362977f571f20e32ea632cfa124a8efba71100f4088530d74d5e7dd98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-master-0\": Pod \"kube-apiserver-master-0\" is invalid: metadata.uid: Invalid value: \"59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d\": field is immutable" Dec 03 14:27:55.727117 master-0 kubenswrapper[4409]: I1203 14:27:55.727057 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:55.727117 master-0 kubenswrapper[4409]: I1203 14:27:55.727115 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:55.727401 master-0 kubenswrapper[4409]: I1203 14:27:55.727135 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:55.731469 master-0 kubenswrapper[4409]: I1203 14:27:55.731441 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Dec 03 14:27:55.734892 master-0 kubenswrapper[4409]: I1203 14:27:55.734832 4409 status_manager.go:861] "Pod was deleted and then 
recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="dcee84e50e891b3b96d641a4f9f6a202" podUID="3f81ebc5-3e17-4971-b0b1-45a337a2142d" Dec 03 14:27:56.736418 master-0 kubenswrapper[4409]: I1203 14:27:56.736335 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:56.736418 master-0 kubenswrapper[4409]: I1203 14:27:56.736378 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:57.743424 master-0 kubenswrapper[4409]: I1203 14:27:57.743334 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:57.743424 master-0 kubenswrapper[4409]: I1203 14:27:57.743397 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="59f2b4a5-7e6c-48c3-9bb8-ffc9727ddd8d" Dec 03 14:27:57.848777 master-0 kubenswrapper[4409]: I1203 14:27:57.848654 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="dcee84e50e891b3b96d641a4f9f6a202" podUID="3f81ebc5-3e17-4971-b0b1-45a337a2142d" Dec 03 14:28:00.237059 master-0 kubenswrapper[4409]: I1203 14:28:00.236957 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:28:04.244410 master-0 kubenswrapper[4409]: I1203 14:28:04.244314 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Dec 03 14:28:04.509167 master-0 kubenswrapper[4409]: I1203 14:28:04.508886 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Dec 03 14:28:04.872848 master-0 kubenswrapper[4409]: I1203 14:28:04.872756 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 03 14:28:04.892648 master-0 kubenswrapper[4409]: I1203 14:28:04.892581 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 03 14:28:04.909137 master-0 kubenswrapper[4409]: I1203 14:28:04.909065 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 03 14:28:05.437868 master-0 kubenswrapper[4409]: I1203 14:28:05.437780 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Dec 03 14:28:05.449192 master-0 kubenswrapper[4409]: I1203 14:28:05.449143 4409 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 03 14:28:05.498252 master-0 kubenswrapper[4409]: I1203 14:28:05.498190 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 03 14:28:05.787697 master-0 kubenswrapper[4409]: I1203 14:28:05.787573 4409 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 03 14:28:05.800671 master-0 kubenswrapper[4409]: I1203 14:28:05.800608 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 03 14:28:06.113621 master-0 kubenswrapper[4409]: I1203 14:28:06.113562 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 03 14:28:06.130266 master-0 kubenswrapper[4409]: I1203 14:28:06.130181 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 03 
14:28:06.130497 master-0 kubenswrapper[4409]: I1203 14:28:06.130439 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Dec 03 14:28:06.156418 master-0 kubenswrapper[4409]: I1203 14:28:06.155666 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-33kamir7f7ukf" Dec 03 14:28:06.182295 master-0 kubenswrapper[4409]: I1203 14:28:06.182236 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Dec 03 14:28:06.212661 master-0 kubenswrapper[4409]: I1203 14:28:06.212607 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 03 14:28:06.840860 master-0 kubenswrapper[4409]: I1203 14:28:06.840760 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 03 14:28:06.855304 master-0 kubenswrapper[4409]: I1203 14:28:06.855229 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 03 14:28:06.876541 master-0 kubenswrapper[4409]: I1203 14:28:06.876479 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 03 14:28:06.948889 master-0 kubenswrapper[4409]: I1203 14:28:06.948809 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 03 14:28:07.155544 master-0 kubenswrapper[4409]: I1203 14:28:07.155394 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Dec 03 14:28:07.174031 master-0 kubenswrapper[4409]: I1203 14:28:07.173946 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Dec 03 14:28:07.255685 master-0 kubenswrapper[4409]: I1203 14:28:07.255620 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Dec 03 14:28:07.311895 master-0 kubenswrapper[4409]: I1203 14:28:07.311821 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 03 14:28:07.321045 master-0 kubenswrapper[4409]: I1203 14:28:07.320984 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 03 14:28:07.321812 master-0 kubenswrapper[4409]: I1203 14:28:07.321756 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2bc14vqi7sofg" Dec 03 14:28:07.337824 master-0 kubenswrapper[4409]: I1203 14:28:07.337750 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 03 14:28:07.339722 master-0 kubenswrapper[4409]: I1203 14:28:07.339687 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 03 14:28:07.365216 master-0 kubenswrapper[4409]: I1203 14:28:07.365160 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 03 14:28:07.385252 master-0 kubenswrapper[4409]: I1203 14:28:07.385183 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 03 14:28:07.389129 master-0 kubenswrapper[4409]: I1203 14:28:07.389095 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Dec 03 14:28:07.446017 master-0 kubenswrapper[4409]: I1203 14:28:07.445882 4409 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 03 14:28:07.471087 master-0 kubenswrapper[4409]: I1203 14:28:07.470993 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Dec 03 14:28:07.488456 master-0 kubenswrapper[4409]: I1203 14:28:07.488338 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Dec 03 14:28:07.572397 master-0 kubenswrapper[4409]: I1203 14:28:07.572339 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 03 14:28:07.703923 master-0 kubenswrapper[4409]: I1203 14:28:07.703749 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 03 14:28:07.777474 master-0 kubenswrapper[4409]: I1203 14:28:07.777399 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 03 14:28:07.781190 master-0 kubenswrapper[4409]: I1203 14:28:07.781118 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 03 14:28:07.787241 master-0 kubenswrapper[4409]: I1203 14:28:07.787188 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Dec 03 14:28:07.856456 master-0 kubenswrapper[4409]: I1203 14:28:07.856409 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 03 14:28:07.863047 master-0 kubenswrapper[4409]: E1203 14:28:07.862936 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007\": container with ID starting with 
c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007 not found: ID does not exist" containerID="c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007" Dec 03 14:28:07.863168 master-0 kubenswrapper[4409]: I1203 14:28:07.863066 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007" err="rpc error: code = NotFound desc = could not find container \"c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007\": container with ID starting with c0393e96102741fb863f341a21d99656878ac51ca2055269994162a7cd342007 not found: ID does not exist" Dec 03 14:28:07.863770 master-0 kubenswrapper[4409]: E1203 14:28:07.863707 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3\": container with ID starting with 7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3 not found: ID does not exist" containerID="7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3" Dec 03 14:28:07.863854 master-0 kubenswrapper[4409]: I1203 14:28:07.863771 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3" err="rpc error: code = NotFound desc = could not find container \"7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3\": container with ID starting with 7d1468befd711fc502813be0ea0f005aea030804dcd1f8549337546e91f235b3 not found: ID does not exist" Dec 03 14:28:07.864210 master-0 kubenswrapper[4409]: E1203 14:28:07.864167 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4\": container with ID starting with 
c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4 not found: ID does not exist" containerID="c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4" Dec 03 14:28:07.864284 master-0 kubenswrapper[4409]: I1203 14:28:07.864213 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4" err="rpc error: code = NotFound desc = could not find container \"c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4\": container with ID starting with c9a5b1b142383e7901debd94f1a5d96df47b004ed2a3852448e63a1d85c29fe4 not found: ID does not exist" Dec 03 14:28:07.864695 master-0 kubenswrapper[4409]: E1203 14:28:07.864647 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc\": container with ID starting with a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc not found: ID does not exist" containerID="a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc" Dec 03 14:28:07.864695 master-0 kubenswrapper[4409]: I1203 14:28:07.864683 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc" err="rpc error: code = NotFound desc = could not find container \"a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc\": container with ID starting with a8ab550656033b3f4a20406b251adc31c4da7264f3e9696691f7c79c2e4bf6dc not found: ID does not exist" Dec 03 14:28:07.865053 master-0 kubenswrapper[4409]: E1203 14:28:07.864980 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01\": container with ID starting with 
b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01 not found: ID does not exist" containerID="b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01" Dec 03 14:28:07.865120 master-0 kubenswrapper[4409]: I1203 14:28:07.865061 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01" err="rpc error: code = NotFound desc = could not find container \"b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01\": container with ID starting with b874eb089d31bbcfe7a7cb9e9c171e4ea69e6aff68b4d7cebe5b7ff632601d01 not found: ID does not exist" Dec 03 14:28:07.865446 master-0 kubenswrapper[4409]: E1203 14:28:07.865412 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a\": container with ID starting with 5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a not found: ID does not exist" containerID="5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a" Dec 03 14:28:07.865492 master-0 kubenswrapper[4409]: I1203 14:28:07.865441 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a" err="rpc error: code = NotFound desc = could not find container \"5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a\": container with ID starting with 5f2ad3a45d00834232f07b2365561cc0a170b2213488941c75a2b00d8543a38a not found: ID does not exist" Dec 03 14:28:07.867500 master-0 kubenswrapper[4409]: E1203 14:28:07.867464 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316\": container with ID starting with 
4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316 not found: ID does not exist" containerID="4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316" Dec 03 14:28:07.867500 master-0 kubenswrapper[4409]: I1203 14:28:07.867493 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316" err="rpc error: code = NotFound desc = could not find container \"4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316\": container with ID starting with 4a9dcdc34703d9691b8461b50351e971eab2ca17cdb0b90e438c72f984cda316 not found: ID does not exist" Dec 03 14:28:07.867800 master-0 kubenswrapper[4409]: E1203 14:28:07.867767 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12\": container with ID starting with 83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12 not found: ID does not exist" containerID="83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12" Dec 03 14:28:07.867875 master-0 kubenswrapper[4409]: I1203 14:28:07.867856 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12" err="rpc error: code = NotFound desc = could not find container \"83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12\": container with ID starting with 83a1133cd911735d04a6b10e35f33fbbd1048db40dbca3ff9417a8e7c4cb2f12 not found: ID does not exist" Dec 03 14:28:07.868911 master-0 kubenswrapper[4409]: E1203 14:28:07.868870 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f\": container with ID starting with 
767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f not found: ID does not exist" containerID="767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f" Dec 03 14:28:07.868977 master-0 kubenswrapper[4409]: I1203 14:28:07.868914 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f" err="rpc error: code = NotFound desc = could not find container \"767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f\": container with ID starting with 767e093bde08945e423f4b2f1823ed80e24d9884bf50de4c2f350ff46cbfab6f not found: ID does not exist" Dec 03 14:28:07.869453 master-0 kubenswrapper[4409]: E1203 14:28:07.869421 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c\": container with ID starting with a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c not found: ID does not exist" containerID="a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c" Dec 03 14:28:07.869497 master-0 kubenswrapper[4409]: I1203 14:28:07.869449 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c" err="rpc error: code = NotFound desc = could not find container \"a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c\": container with ID starting with a07f8950d26c395a492502a951ac95712e6d665dd6b02922cef6364d6239c25c not found: ID does not exist" Dec 03 14:28:07.869831 master-0 kubenswrapper[4409]: E1203 14:28:07.869796 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1\": container with ID starting with 
a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1 not found: ID does not exist" containerID="a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1" Dec 03 14:28:07.869876 master-0 kubenswrapper[4409]: I1203 14:28:07.869826 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1" err="rpc error: code = NotFound desc = could not find container \"a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1\": container with ID starting with a17626298ce098bf95d54b05edf3d2f2232deff9afd700084a48b88798d2d6b1 not found: ID does not exist" Dec 03 14:28:07.870590 master-0 kubenswrapper[4409]: E1203 14:28:07.870539 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4\": container with ID starting with fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4 not found: ID does not exist" containerID="fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4" Dec 03 14:28:07.870590 master-0 kubenswrapper[4409]: I1203 14:28:07.870564 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4" err="rpc error: code = NotFound desc = could not find container \"fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4\": container with ID starting with fb6b5740563c08a3ec924e8967c1430f046df15559c05d652624342cfceab2e4 not found: ID does not exist" Dec 03 14:28:07.871427 master-0 kubenswrapper[4409]: E1203 14:28:07.871399 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3\": container with ID starting with 
a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3 not found: ID does not exist" containerID="a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3" Dec 03 14:28:07.871482 master-0 kubenswrapper[4409]: I1203 14:28:07.871425 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3" err="rpc error: code = NotFound desc = could not find container \"a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3\": container with ID starting with a456cc2a1887df37d521bf810b3a5c64a6e76efbb641dcc27c712724dadb49e3 not found: ID does not exist" Dec 03 14:28:07.871902 master-0 kubenswrapper[4409]: E1203 14:28:07.871870 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2\": container with ID starting with dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2 not found: ID does not exist" containerID="dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2" Dec 03 14:28:07.871949 master-0 kubenswrapper[4409]: I1203 14:28:07.871897 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2" err="rpc error: code = NotFound desc = could not find container \"dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2\": container with ID starting with dc21bb45ba1c8ff67190beeaf9e5c9882ec468c3544861a6043f0165b0f5a5a2 not found: ID does not exist" Dec 03 14:28:07.872404 master-0 kubenswrapper[4409]: E1203 14:28:07.872374 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd\": container with ID starting with 
2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd not found: ID does not exist" containerID="2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd" Dec 03 14:28:07.872448 master-0 kubenswrapper[4409]: I1203 14:28:07.872400 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd" err="rpc error: code = NotFound desc = could not find container \"2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd\": container with ID starting with 2720ac4e136c516ade031cc9f5f2eb784e6eea84614c79510b3a8681a11ebffd not found: ID does not exist" Dec 03 14:28:07.873253 master-0 kubenswrapper[4409]: E1203 14:28:07.873224 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f\": container with ID starting with d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f not found: ID does not exist" containerID="d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f" Dec 03 14:28:07.873301 master-0 kubenswrapper[4409]: I1203 14:28:07.873250 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f" err="rpc error: code = NotFound desc = could not find container \"d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f\": container with ID starting with d2912d0cf398123c2798e5c4ba95e960e81a8e3d575a43c87dc45dee7d34180f not found: ID does not exist" Dec 03 14:28:07.910760 master-0 kubenswrapper[4409]: I1203 14:28:07.910691 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 03 14:28:07.916412 master-0 kubenswrapper[4409]: I1203 14:28:07.916342 4409 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 03 14:28:07.962617 master-0 kubenswrapper[4409]: I1203 14:28:07.962295 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 03 14:28:07.988618 master-0 kubenswrapper[4409]: I1203 14:28:07.988501 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 03 14:28:08.012885 master-0 kubenswrapper[4409]: I1203 14:28:08.012781 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-jmtqw"
Dec 03 14:28:08.172686 master-0 kubenswrapper[4409]: I1203 14:28:08.172607 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 03 14:28:08.215924 master-0 kubenswrapper[4409]: I1203 14:28:08.215735 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 03 14:28:08.218853 master-0 kubenswrapper[4409]: I1203 14:28:08.218785 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 03 14:28:08.333113 master-0 kubenswrapper[4409]: I1203 14:28:08.332989 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 03 14:28:08.341761 master-0 kubenswrapper[4409]: I1203 14:28:08.341727 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-56c9b9fa8d9gs"
Dec 03 14:28:08.394220 master-0 kubenswrapper[4409]: I1203 14:28:08.394169 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 03 14:28:08.396110 master-0 kubenswrapper[4409]: I1203 14:28:08.396080 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 03 14:28:08.408874 master-0 kubenswrapper[4409]: I1203 14:28:08.408813 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Dec 03 14:28:08.450803 master-0 kubenswrapper[4409]: I1203 14:28:08.450723 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 03 14:28:08.473141 master-0 kubenswrapper[4409]: I1203 14:28:08.472961 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 03 14:28:08.525138 master-0 kubenswrapper[4409]: I1203 14:28:08.525072 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 03 14:28:08.542176 master-0 kubenswrapper[4409]: I1203 14:28:08.542130 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 03 14:28:08.609918 master-0 kubenswrapper[4409]: I1203 14:28:08.609801 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 03 14:28:08.776304 master-0 kubenswrapper[4409]: I1203 14:28:08.776156 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Dec 03 14:28:08.784146 master-0 kubenswrapper[4409]: I1203 14:28:08.784100 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 03 14:28:08.791804 master-0 kubenswrapper[4409]: I1203 14:28:08.791773 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Dec 03 14:28:08.816117 master-0 kubenswrapper[4409]: I1203 14:28:08.815984 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 03 14:28:08.821768 master-0 kubenswrapper[4409]: I1203 14:28:08.821697 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 03 14:28:08.889120 master-0 kubenswrapper[4409]: I1203 14:28:08.888990 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Dec 03 14:28:08.901324 master-0 kubenswrapper[4409]: I1203 14:28:08.901265 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 03 14:28:08.958998 master-0 kubenswrapper[4409]: I1203 14:28:08.958931 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Dec 03 14:28:09.085344 master-0 kubenswrapper[4409]: I1203 14:28:09.085266 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Dec 03 14:28:09.109968 master-0 kubenswrapper[4409]: I1203 14:28:09.109886 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8wv68"
Dec 03 14:28:09.145279 master-0 kubenswrapper[4409]: I1203 14:28:09.145199 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Dec 03 14:28:09.232627 master-0 kubenswrapper[4409]: I1203 14:28:09.232544 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 03 14:28:09.257400 master-0 kubenswrapper[4409]: I1203 14:28:09.257305 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Dec 03 14:28:09.264611 master-0 kubenswrapper[4409]: I1203 14:28:09.264551 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 03 14:28:09.406088 master-0 kubenswrapper[4409]: I1203 14:28:09.405842 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-9rqxl"
Dec 03 14:28:09.426389 master-0 kubenswrapper[4409]: I1203 14:28:09.426315 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 03 14:28:09.521236 master-0 kubenswrapper[4409]: I1203 14:28:09.521136 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 03 14:28:09.554723 master-0 kubenswrapper[4409]: I1203 14:28:09.554629 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 03 14:28:09.603933 master-0 kubenswrapper[4409]: I1203 14:28:09.603840 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-2d5p6"
Dec 03 14:28:09.605576 master-0 kubenswrapper[4409]: I1203 14:28:09.605517 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 03 14:28:09.631426 master-0 kubenswrapper[4409]: I1203 14:28:09.631345 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 03 14:28:09.638138 master-0 kubenswrapper[4409]: I1203 14:28:09.638078 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Dec 03 14:28:09.679002 master-0 kubenswrapper[4409]: I1203 14:28:09.678808 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 03 14:28:09.704831 master-0 kubenswrapper[4409]: I1203 14:28:09.704650 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Dec 03 14:28:09.728517 master-0 kubenswrapper[4409]: I1203 14:28:09.728440 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 03 14:28:09.837558 master-0 kubenswrapper[4409]: I1203 14:28:09.837453 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Dec 03 14:28:09.858899 master-0 kubenswrapper[4409]: I1203 14:28:09.858833 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 03 14:28:09.862748 master-0 kubenswrapper[4409]: I1203 14:28:09.862701 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 03 14:28:09.881793 master-0 kubenswrapper[4409]: I1203 14:28:09.881736 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 03 14:28:09.921338 master-0 kubenswrapper[4409]: I1203 14:28:09.921266 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 03 14:28:09.940441 master-0 kubenswrapper[4409]: I1203 14:28:09.940291 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 03 14:28:10.062246 master-0 kubenswrapper[4409]: I1203 14:28:10.062149 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Dec 03 14:28:10.098685 master-0 kubenswrapper[4409]: I1203 14:28:10.098604 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 03 14:28:10.139693 master-0 kubenswrapper[4409]: I1203 14:28:10.139620 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-n8h5v"
Dec 03 14:28:10.169770 master-0 kubenswrapper[4409]: I1203 14:28:10.169671 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 03 14:28:10.378397 master-0 kubenswrapper[4409]: I1203 14:28:10.378292 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 03 14:28:10.430626 master-0 kubenswrapper[4409]: I1203 14:28:10.430533 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 03 14:28:10.505451 master-0 kubenswrapper[4409]: I1203 14:28:10.505166 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Dec 03 14:28:10.525423 master-0 kubenswrapper[4409]: I1203 14:28:10.525180 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Dec 03 14:28:10.545246 master-0 kubenswrapper[4409]: I1203 14:28:10.545182 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 03 14:28:10.615244 master-0 kubenswrapper[4409]: I1203 14:28:10.615197 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-2blfd"
Dec 03 14:28:10.624406 master-0 kubenswrapper[4409]: I1203 14:28:10.624353 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 03 14:28:10.629570 master-0 kubenswrapper[4409]: I1203 14:28:10.629491 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Dec 03 14:28:10.634809 master-0 kubenswrapper[4409]: I1203 14:28:10.634730 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 03 14:28:10.644360 master-0 kubenswrapper[4409]: I1203 14:28:10.644322 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Dec 03 14:28:10.650608 master-0 kubenswrapper[4409]: I1203 14:28:10.650546 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 03 14:28:10.686242 master-0 kubenswrapper[4409]: I1203 14:28:10.686195 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 03 14:28:10.706924 master-0 kubenswrapper[4409]: I1203 14:28:10.706894 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Dec 03 14:28:10.707773 master-0 kubenswrapper[4409]: I1203 14:28:10.707750 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Dec 03 14:28:10.767044 master-0 kubenswrapper[4409]: I1203 14:28:10.766947 4409 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 03 14:28:10.767877 master-0 kubenswrapper[4409]: I1203 14:28:10.767779 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=38.767755568 podStartE2EDuration="38.767755568s" podCreationTimestamp="2025-12-03 14:27:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:27:54.381297629 +0000 UTC m=+106.708360155" watchObservedRunningTime="2025-12-03 14:28:10.767755568 +0000 UTC m=+123.094818074"
Dec 03 14:28:10.775438 master-0 kubenswrapper[4409]: I1203 14:28:10.775369 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Dec 03 14:28:10.775438 master-0 kubenswrapper[4409]: I1203 14:28:10.775440 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Dec 03 14:28:10.780632 master-0 kubenswrapper[4409]: I1203 14:28:10.780581 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Dec 03 14:28:10.787987 master-0 kubenswrapper[4409]: I1203 14:28:10.787954 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Dec 03 14:28:10.790548 master-0 kubenswrapper[4409]: I1203 14:28:10.790494 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 03 14:28:10.811405 master-0 kubenswrapper[4409]: I1203 14:28:10.811322 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=15.811296533 podStartE2EDuration="15.811296533s" podCreationTimestamp="2025-12-03 14:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:28:10.803899165 +0000 UTC m=+123.130961681" watchObservedRunningTime="2025-12-03 14:28:10.811296533 +0000 UTC m=+123.138359069"
Dec 03 14:28:10.851534 master-0 kubenswrapper[4409]: I1203 14:28:10.851461 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-c24sh"
Dec 03 14:28:10.865200 master-0 kubenswrapper[4409]: I1203 14:28:10.865102 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Dec 03 14:28:10.914403 master-0 kubenswrapper[4409]: I1203 14:28:10.914258 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Dec 03 14:28:10.920426 master-0 kubenswrapper[4409]: I1203 14:28:10.920364 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Dec 03 14:28:10.936621 master-0 kubenswrapper[4409]: I1203 14:28:10.936576 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Dec 03 14:28:10.978826 master-0 kubenswrapper[4409]: I1203 14:28:10.978724 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-twpdm"
Dec 03 14:28:11.014480 master-0 kubenswrapper[4409]: I1203 14:28:11.014411 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 03 14:28:11.079975 master-0 kubenswrapper[4409]: I1203 14:28:11.079905 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-w2dfx"
Dec 03 14:28:11.082946 master-0 kubenswrapper[4409]: I1203 14:28:11.082916 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m5v4g"
Dec 03 14:28:11.156426 master-0 kubenswrapper[4409]: I1203 14:28:11.156369 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Dec 03 14:28:11.191146 master-0 kubenswrapper[4409]: I1203 14:28:11.191029 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 03 14:28:11.228998 master-0 kubenswrapper[4409]: I1203 14:28:11.228956 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 03 14:28:11.331800 master-0 kubenswrapper[4409]: I1203 14:28:11.331414 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 03 14:28:11.365880 master-0 kubenswrapper[4409]: I1203 14:28:11.363786 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 03 14:28:11.388465 master-0 kubenswrapper[4409]: I1203 14:28:11.388389 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Dec 03 14:28:11.468920 master-0 kubenswrapper[4409]: I1203 14:28:11.468762 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 03 14:28:11.543168 master-0 kubenswrapper[4409]: I1203 14:28:11.543078 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 03 14:28:11.557647 master-0 kubenswrapper[4409]: I1203 14:28:11.557567 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 03 14:28:11.604411 master-0 kubenswrapper[4409]: I1203 14:28:11.604329 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Dec 03 14:28:11.636242 master-0 kubenswrapper[4409]: I1203 14:28:11.636154 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Dec 03 14:28:11.828782 master-0 kubenswrapper[4409]: I1203 14:28:11.828343 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 03 14:28:11.828782 master-0 kubenswrapper[4409]: I1203 14:28:11.828378 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 03 14:28:11.828782 master-0 kubenswrapper[4409]: I1203 14:28:11.828815 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 03 14:28:11.829330 master-0 kubenswrapper[4409]: I1203 14:28:11.829281 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-vpms9"
Dec 03 14:28:11.829330 master-0 kubenswrapper[4409]: I1203 14:28:11.829297 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 03 14:28:11.859487 master-0 kubenswrapper[4409]: I1203 14:28:11.859433 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 03 14:28:11.919812 master-0 kubenswrapper[4409]: I1203 14:28:11.919749 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 03 14:28:12.010603 master-0 kubenswrapper[4409]: I1203 14:28:12.010447 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gdnn5"
Dec 03 14:28:12.084827 master-0 kubenswrapper[4409]: I1203 14:28:12.084657 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 03 14:28:12.113553 master-0 kubenswrapper[4409]: I1203 14:28:12.113429 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Dec 03 14:28:12.133020 master-0 kubenswrapper[4409]: I1203 14:28:12.132919 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 03 14:28:12.176179 master-0 kubenswrapper[4409]: I1203 14:28:12.176096 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Dec 03 14:28:12.230095 master-0 kubenswrapper[4409]: I1203 14:28:12.230025 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Dec 03 14:28:12.276560 master-0 kubenswrapper[4409]: I1203 14:28:12.276501 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Dec 03 14:28:12.281909 master-0 kubenswrapper[4409]: I1203 14:28:12.281844 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 03 14:28:12.285754 master-0 kubenswrapper[4409]: I1203 14:28:12.285696 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-g5njm"
Dec 03 14:28:12.307267 master-0 kubenswrapper[4409]: I1203 14:28:12.307161 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 03 14:28:12.446838 master-0 kubenswrapper[4409]: I1203 14:28:12.446670 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 03 14:28:12.455991 master-0 kubenswrapper[4409]: I1203 14:28:12.455912 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 03 14:28:12.490809 master-0 kubenswrapper[4409]: I1203 14:28:12.490698 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 03 14:28:12.558626 master-0 kubenswrapper[4409]: I1203 14:28:12.558545 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 03 14:28:12.576300 master-0 kubenswrapper[4409]: I1203 14:28:12.576198 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Dec 03 14:28:12.662934 master-0 kubenswrapper[4409]: I1203 14:28:12.662876 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:28:12.753192 master-0 kubenswrapper[4409]: I1203 14:28:12.752946 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 03 14:28:12.788105 master-0 kubenswrapper[4409]: I1203 14:28:12.788032 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 03 14:28:12.842822 master-0 kubenswrapper[4409]: I1203 14:28:12.842702 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Dec 03 14:28:12.854146 master-0 kubenswrapper[4409]: I1203 14:28:12.854078 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Dec 03 14:28:12.889817 master-0 kubenswrapper[4409]: I1203 14:28:12.885864 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-59f99"
Dec 03 14:28:12.906044 master-0 kubenswrapper[4409]: I1203 14:28:12.905761 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Dec 03 14:28:12.906340 master-0 kubenswrapper[4409]: I1203 14:28:12.906118 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 03 14:28:12.906340 master-0 kubenswrapper[4409]: I1203 14:28:12.906269 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 03 14:28:12.939043 master-0 kubenswrapper[4409]: I1203 14:28:12.936782 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 03 14:28:12.974789 master-0 kubenswrapper[4409]: I1203 14:28:12.974694 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 03 14:28:13.042305 master-0 kubenswrapper[4409]: I1203 14:28:13.042175 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 03 14:28:13.044241 master-0 kubenswrapper[4409]: I1203 14:28:13.044212 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 03 14:28:13.102326 master-0 kubenswrapper[4409]: I1203 14:28:13.102268 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Dec 03 14:28:13.143363 master-0 kubenswrapper[4409]: I1203 14:28:13.143292 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 03 14:28:13.198361 master-0 kubenswrapper[4409]: I1203 14:28:13.198312 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-wksjv"
Dec 03 14:28:13.280677 master-0 kubenswrapper[4409]: I1203 14:28:13.280620 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Dec 03 14:28:13.297079 master-0 kubenswrapper[4409]: I1203 14:28:13.296934 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 03 14:28:13.333524 master-0 kubenswrapper[4409]: I1203 14:28:13.333465 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 03 14:28:13.413708 master-0 kubenswrapper[4409]: I1203 14:28:13.413648 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Dec 03 14:28:13.422059 master-0 kubenswrapper[4409]: I1203 14:28:13.421998 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 03 14:28:13.438992 master-0 kubenswrapper[4409]: I1203 14:28:13.438939 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 03 14:28:13.456845 master-0 kubenswrapper[4409]: I1203 14:28:13.456784 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 03 14:28:13.461903 master-0 kubenswrapper[4409]: I1203 14:28:13.461842 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-rmhwz"
Dec 03 14:28:13.484178 master-0 kubenswrapper[4409]: I1203 14:28:13.484118 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 03 14:28:13.484510 master-0 kubenswrapper[4409]: I1203 14:28:13.484431 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 03 14:28:13.512696 master-0 kubenswrapper[4409]: I1203 14:28:13.512645 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 03 14:28:13.524021 master-0 kubenswrapper[4409]: I1203 14:28:13.523965 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 03 14:28:13.524233 master-0 kubenswrapper[4409]: I1203 14:28:13.524213 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wp55d"
Dec 03 14:28:13.590205 master-0 kubenswrapper[4409]: I1203 14:28:13.590155 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 03 14:28:13.601325 master-0 kubenswrapper[4409]: I1203 14:28:13.601271 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-glhsw"
Dec 03 14:28:13.614341 master-0 kubenswrapper[4409]: I1203 14:28:13.614289 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Dec 03 14:28:13.648261 master-0 kubenswrapper[4409]: I1203 14:28:13.648180 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 03 14:28:13.687604 master-0 kubenswrapper[4409]: I1203 14:28:13.687528 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Dec 03 14:28:13.716744 master-0 kubenswrapper[4409]: I1203 14:28:13.716706 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Dec 03 14:28:13.719575 master-0 kubenswrapper[4409]: I1203 14:28:13.719549 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 03 14:28:13.770354 master-0 kubenswrapper[4409]: I1203 14:28:13.770277 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Dec 03 14:28:13.827649 master-0 kubenswrapper[4409]: I1203 14:28:13.827600 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 03 14:28:13.849868 master-0 kubenswrapper[4409]: I1203 14:28:13.849762 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 03 14:28:13.870581 master-0 kubenswrapper[4409]: I1203 14:28:13.870530 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 03 14:28:13.885077 master-0 kubenswrapper[4409]: I1203 14:28:13.885035 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Dec 03 14:28:13.914698 master-0 kubenswrapper[4409]: I1203 14:28:13.914649 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-qsbb9"
Dec 03 14:28:14.081438 master-0 kubenswrapper[4409]: I1203 14:28:14.081374 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 03 14:28:14.251744 master-0 kubenswrapper[4409]: I1203 14:28:14.251628 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-v4qp8"
Dec 03 14:28:14.271867 master-0 kubenswrapper[4409]: I1203 14:28:14.271817 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 03 14:28:14.301597 master-0 kubenswrapper[4409]: I1203 14:28:14.301543 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Dec 03 14:28:14.436228 master-0 kubenswrapper[4409]: I1203 14:28:14.436146 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 03 14:28:14.453656 master-0 kubenswrapper[4409]: I1203 14:28:14.453592 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nqkqh"
Dec 03 14:28:14.485876 master-0 kubenswrapper[4409]: I1203 14:28:14.485811 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 03 14:28:14.528404 master-0 kubenswrapper[4409]: I1203 14:28:14.528198 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 03 14:28:14.541912 master-0 kubenswrapper[4409]: I1203 14:28:14.541835 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Dec 03 14:28:14.653591 master-0 kubenswrapper[4409]: I1203 14:28:14.653523 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-fwsd5"
Dec 03 14:28:14.723105 master-0 kubenswrapper[4409]: I1203 14:28:14.723048 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 03 14:28:14.732648 master-0 kubenswrapper[4409]: I1203 14:28:14.732602 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Dec 03 14:28:14.747295 master-0 kubenswrapper[4409]: I1203 14:28:14.747207 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 03 14:28:14.756218 master-0 kubenswrapper[4409]: I1203 14:28:14.756165 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-l6rgr"
Dec 03 14:28:14.771267 master-0 kubenswrapper[4409]: I1203 14:28:14.771207 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 03 14:28:14.986762 master-0 kubenswrapper[4409]: I1203 14:28:14.986685 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Dec 03 14:28:15.019603 master-0 kubenswrapper[4409]: I1203 14:28:15.019526 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 03 14:28:15.042892 master-0 kubenswrapper[4409]: I1203 14:28:15.042824 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Dec 03 14:28:15.083651 master-0 kubenswrapper[4409]: I1203 14:28:15.083579 4409 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 03 14:28:15.088420 master-0 kubenswrapper[4409]: I1203 14:28:15.088402 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 03 14:28:15.165835 master-0 kubenswrapper[4409]: I1203 14:28:15.165780 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 03 14:28:15.185330 master-0 kubenswrapper[4409]: I1203 14:28:15.185285 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Dec 03 14:28:15.186817 master-0 kubenswrapper[4409]: I1203 14:28:15.186755 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 03 14:28:15.234802 master-0 kubenswrapper[4409]: I1203 14:28:15.234722 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 03 14:28:15.282385 master-0 kubenswrapper[4409]: I1203 14:28:15.282139 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 03 14:28:15.323032 master-0 kubenswrapper[4409]: I1203 14:28:15.322937 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 03 14:28:15.448336 master-0 kubenswrapper[4409]: I1203 14:28:15.448263 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 03 14:28:15.471616 master-0 kubenswrapper[4409]: I1203 14:28:15.471551 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Dec 03 14:28:15.474073 master-0 kubenswrapper[4409]: I1203 14:28:15.473992 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Dec 03 14:28:15.520035 master-0 kubenswrapper[4409]: I1203 14:28:15.519919 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Dec 03 14:28:15.520574 master-0 kubenswrapper[4409]: I1203 14:28:15.520473 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 03 14:28:15.526860 master-0 kubenswrapper[4409]: I1203 14:28:15.526789 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 03 14:28:15.608313 master-0 kubenswrapper[4409]: I1203 14:28:15.608228 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 03 14:28:15.622065 master-0 kubenswrapper[4409]: I1203 14:28:15.621959 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 03 14:28:15.767534 master-0 kubenswrapper[4409]: I1203 14:28:15.767424 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 03 14:28:15.819902 master-0 kubenswrapper[4409]: I1203 14:28:15.819828 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 03 14:28:15.823893 master-0 kubenswrapper[4409]: I1203 14:28:15.823857 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Dec 03 14:28:15.876946 master-0 kubenswrapper[4409]: I1203 14:28:15.876770 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 03 14:28:15.897154 master-0 kubenswrapper[4409]: I1203 14:28:15.897075 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 03 14:28:15.982341 master-0 kubenswrapper[4409]: I1203 14:28:15.978189 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 03 14:28:15.989956 master-0 kubenswrapper[4409]: I1203 14:28:15.988330 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 03 14:28:15.994848 master-0 kubenswrapper[4409]: I1203 14:28:15.994790 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 03 14:28:16.135625 master-0 kubenswrapper[4409]: I1203 14:28:16.135284 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 03 14:28:16.139824 master-0 kubenswrapper[4409]: I1203 14:28:16.139774 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 03 14:28:16.246622 master-0 kubenswrapper[4409]: I1203 14:28:16.246549 4409 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 03 14:28:16.257852 master-0 kubenswrapper[4409]: I1203 14:28:16.257803 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Dec 03 14:28:16.277381 master-0 kubenswrapper[4409]: I1203 14:28:16.277324 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 03 14:28:16.301664 master-0 kubenswrapper[4409]: I1203 14:28:16.301591 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 03 14:28:16.331158 master-0 kubenswrapper[4409]: I1203 14:28:16.331066 4409 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 03 14:28:16.387296 master-0 kubenswrapper[4409]: I1203 14:28:16.387147 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8ekn1l23o09kv" Dec 03 14:28:16.403046 master-0 kubenswrapper[4409]: I1203 14:28:16.402976 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 03 14:28:16.441957 master-0 kubenswrapper[4409]: I1203 14:28:16.441890 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 03 14:28:16.459252 master-0 kubenswrapper[4409]: I1203 14:28:16.459197 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-xw6t4" Dec 03 14:28:16.680188 master-0 kubenswrapper[4409]: I1203 14:28:16.679985 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 03 14:28:16.768648 master-0 kubenswrapper[4409]: I1203 14:28:16.768583 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Dec 03 14:28:16.770170 master-0 kubenswrapper[4409]: I1203 14:28:16.770123 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Dec 03 14:28:16.788228 master-0 kubenswrapper[4409]: I1203 14:28:16.788179 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 03 14:28:16.798419 master-0 kubenswrapper[4409]: I1203 14:28:16.798315 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Dec 03 14:28:16.864116 master-0 kubenswrapper[4409]: I1203 14:28:16.864045 4409 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 03 14:28:16.972646 master-0 kubenswrapper[4409]: I1203 14:28:16.972521 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-prvgv" Dec 03 14:28:16.981255 master-0 kubenswrapper[4409]: I1203 14:28:16.981212 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 03 14:28:17.002807 master-0 kubenswrapper[4409]: I1203 14:28:17.002755 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:28:17.033566 master-0 kubenswrapper[4409]: I1203 14:28:17.033503 4409 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:28:17.033856 master-0 kubenswrapper[4409]: I1203 14:28:17.033812 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ff63f5c90356b311bfd02f62719f7c37" containerName="startup-monitor" containerID="cri-o://7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7" gracePeriod=5 Dec 03 14:28:17.059279 master-0 kubenswrapper[4409]: I1203 14:28:17.059217 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-js47f" Dec 03 14:28:17.100105 master-0 kubenswrapper[4409]: I1203 14:28:17.100041 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 03 14:28:17.125606 master-0 kubenswrapper[4409]: I1203 14:28:17.125514 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 03 14:28:17.277371 master-0 kubenswrapper[4409]: I1203 14:28:17.277225 4409 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 03 14:28:17.360929 master-0 kubenswrapper[4409]: I1203 14:28:17.360860 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 03 14:28:17.387717 master-0 kubenswrapper[4409]: I1203 14:28:17.387634 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Dec 03 14:28:17.555109 master-0 kubenswrapper[4409]: I1203 14:28:17.555068 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-bdlwz" Dec 03 14:28:17.580469 master-0 kubenswrapper[4409]: I1203 14:28:17.580412 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 03 14:28:17.673659 master-0 kubenswrapper[4409]: I1203 14:28:17.673599 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-7n524" Dec 03 14:28:17.772189 master-0 kubenswrapper[4409]: I1203 14:28:17.772131 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 03 14:28:17.818442 master-0 kubenswrapper[4409]: I1203 14:28:17.816360 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 03 14:28:17.895919 master-0 kubenswrapper[4409]: I1203 14:28:17.895865 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 03 14:28:17.896461 master-0 kubenswrapper[4409]: I1203 14:28:17.896433 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-8zh52" Dec 03 14:28:17.923105 master-0 kubenswrapper[4409]: I1203 
14:28:17.922971 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-cb9jg" Dec 03 14:28:18.024732 master-0 kubenswrapper[4409]: I1203 14:28:18.024673 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Dec 03 14:28:18.063342 master-0 kubenswrapper[4409]: I1203 14:28:18.063278 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 03 14:28:18.078169 master-0 kubenswrapper[4409]: I1203 14:28:18.077977 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Dec 03 14:28:18.126289 master-0 kubenswrapper[4409]: I1203 14:28:18.126238 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 03 14:28:18.127807 master-0 kubenswrapper[4409]: I1203 14:28:18.126912 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 03 14:28:18.130811 master-0 kubenswrapper[4409]: I1203 14:28:18.130782 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 03 14:28:18.209753 master-0 kubenswrapper[4409]: I1203 14:28:18.209690 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 03 14:28:18.242364 master-0 kubenswrapper[4409]: I1203 14:28:18.242297 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 03 14:28:18.390073 master-0 kubenswrapper[4409]: I1203 14:28:18.389894 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Dec 03 14:28:18.508804 master-0 
kubenswrapper[4409]: I1203 14:28:18.508738 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 03 14:28:18.638246 master-0 kubenswrapper[4409]: I1203 14:28:18.638192 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 03 14:28:18.693420 master-0 kubenswrapper[4409]: I1203 14:28:18.693284 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 03 14:28:18.861903 master-0 kubenswrapper[4409]: I1203 14:28:18.861845 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Dec 03 14:28:18.933344 master-0 kubenswrapper[4409]: I1203 14:28:18.933267 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 03 14:28:19.092949 master-0 kubenswrapper[4409]: I1203 14:28:19.092900 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Dec 03 14:28:19.093202 master-0 kubenswrapper[4409]: I1203 14:28:19.093085 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-cqsrd" Dec 03 14:28:19.124454 master-0 kubenswrapper[4409]: I1203 14:28:19.124350 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-2fgkw" Dec 03 14:28:19.138989 master-0 kubenswrapper[4409]: I1203 14:28:19.138917 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 03 14:28:19.149040 master-0 kubenswrapper[4409]: I1203 14:28:19.148986 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 03 14:28:19.246559 master-0 
kubenswrapper[4409]: I1203 14:28:19.246480 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 03 14:28:19.400041 master-0 kubenswrapper[4409]: I1203 14:28:19.399919 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 03 14:28:19.454806 master-0 kubenswrapper[4409]: I1203 14:28:19.454738 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 03 14:28:19.510430 master-0 kubenswrapper[4409]: I1203 14:28:19.510377 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 03 14:28:19.526914 master-0 kubenswrapper[4409]: I1203 14:28:19.526866 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Dec 03 14:28:19.534039 master-0 kubenswrapper[4409]: I1203 14:28:19.533986 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 03 14:28:19.555436 master-0 kubenswrapper[4409]: I1203 14:28:19.555349 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Dec 03 14:28:19.694777 master-0 kubenswrapper[4409]: I1203 14:28:19.694624 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Dec 03 14:28:19.716269 master-0 kubenswrapper[4409]: I1203 14:28:19.716222 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 03 14:28:19.822334 master-0 kubenswrapper[4409]: I1203 14:28:19.822288 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 03 14:28:19.885025 master-0 kubenswrapper[4409]: I1203 14:28:19.884954 
4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 03 14:28:19.953080 master-0 kubenswrapper[4409]: I1203 14:28:19.952962 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 03 14:28:20.049179 master-0 kubenswrapper[4409]: I1203 14:28:20.049123 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 03 14:28:20.049457 master-0 kubenswrapper[4409]: I1203 14:28:20.049125 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 03 14:28:20.067956 master-0 kubenswrapper[4409]: I1203 14:28:20.067905 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 03 14:28:20.121386 master-0 kubenswrapper[4409]: I1203 14:28:20.121321 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-6sltv" Dec 03 14:28:20.260652 master-0 kubenswrapper[4409]: I1203 14:28:20.260546 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 03 14:28:20.367315 master-0 kubenswrapper[4409]: I1203 14:28:20.367232 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 03 14:28:20.456424 master-0 kubenswrapper[4409]: I1203 14:28:20.456358 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Dec 03 14:28:20.518891 master-0 kubenswrapper[4409]: I1203 14:28:20.518765 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 03 14:28:20.961963 master-0 kubenswrapper[4409]: I1203 14:28:20.961860 4409 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 03 14:28:21.124438 master-0 kubenswrapper[4409]: I1203 14:28:21.124369 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 03 14:28:21.129392 master-0 kubenswrapper[4409]: I1203 14:28:21.129367 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 03 14:28:21.733440 master-0 kubenswrapper[4409]: I1203 14:28:21.733382 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Dec 03 14:28:21.738667 master-0 kubenswrapper[4409]: I1203 14:28:21.738617 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 03 14:28:21.882046 master-0 kubenswrapper[4409]: I1203 14:28:21.881962 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 03 14:28:22.184626 master-0 kubenswrapper[4409]: I1203 14:28:22.184571 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 03 14:28:22.606354 master-0 kubenswrapper[4409]: I1203 14:28:22.606302 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ff63f5c90356b311bfd02f62719f7c37/startup-monitor/0.log" Dec 03 14:28:22.606519 master-0 kubenswrapper[4409]: I1203 14:28:22.606420 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:28:22.693357 master-0 kubenswrapper[4409]: I1203 14:28:22.693300 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log\") pod \"ff63f5c90356b311bfd02f62719f7c37\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " Dec 03 14:28:22.693600 master-0 kubenswrapper[4409]: I1203 14:28:22.693369 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir\") pod \"ff63f5c90356b311bfd02f62719f7c37\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " Dec 03 14:28:22.693600 master-0 kubenswrapper[4409]: I1203 14:28:22.693473 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock\") pod \"ff63f5c90356b311bfd02f62719f7c37\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " Dec 03 14:28:22.693672 master-0 kubenswrapper[4409]: I1203 14:28:22.693551 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock" (OuterVolumeSpecName: "var-lock") pod "ff63f5c90356b311bfd02f62719f7c37" (UID: "ff63f5c90356b311bfd02f62719f7c37"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:28:22.693750 master-0 kubenswrapper[4409]: I1203 14:28:22.693623 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log" (OuterVolumeSpecName: "var-log") pod "ff63f5c90356b311bfd02f62719f7c37" (UID: "ff63f5c90356b311bfd02f62719f7c37"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:28:22.693750 master-0 kubenswrapper[4409]: I1203 14:28:22.693706 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests" (OuterVolumeSpecName: "manifests") pod "ff63f5c90356b311bfd02f62719f7c37" (UID: "ff63f5c90356b311bfd02f62719f7c37"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:28:22.693750 master-0 kubenswrapper[4409]: I1203 14:28:22.693683 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests\") pod \"ff63f5c90356b311bfd02f62719f7c37\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " Dec 03 14:28:22.693999 master-0 kubenswrapper[4409]: I1203 14:28:22.693949 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir\") pod \"ff63f5c90356b311bfd02f62719f7c37\" (UID: \"ff63f5c90356b311bfd02f62719f7c37\") " Dec 03 14:28:22.694067 master-0 kubenswrapper[4409]: I1203 14:28:22.694037 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ff63f5c90356b311bfd02f62719f7c37" (UID: "ff63f5c90356b311bfd02f62719f7c37"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:28:22.695042 master-0 kubenswrapper[4409]: I1203 14:28:22.694983 4409 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-lock\") on node \"master-0\" DevicePath \"\"" Dec 03 14:28:22.695122 master-0 kubenswrapper[4409]: I1203 14:28:22.695055 4409 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-manifests\") on node \"master-0\" DevicePath \"\"" Dec 03 14:28:22.695122 master-0 kubenswrapper[4409]: I1203 14:28:22.695078 4409 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:28:22.695122 master-0 kubenswrapper[4409]: I1203 14:28:22.695100 4409 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-var-log\") on node \"master-0\" DevicePath \"\"" Dec 03 14:28:22.701331 master-0 kubenswrapper[4409]: I1203 14:28:22.701276 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ff63f5c90356b311bfd02f62719f7c37" (UID: "ff63f5c90356b311bfd02f62719f7c37"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:28:22.796341 master-0 kubenswrapper[4409]: I1203 14:28:22.796232 4409 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ff63f5c90356b311bfd02f62719f7c37-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:28:22.957477 master-0 kubenswrapper[4409]: I1203 14:28:22.957290 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ff63f5c90356b311bfd02f62719f7c37/startup-monitor/0.log" Dec 03 14:28:22.957779 master-0 kubenswrapper[4409]: I1203 14:28:22.957646 4409 generic.go:334] "Generic (PLEG): container finished" podID="ff63f5c90356b311bfd02f62719f7c37" containerID="7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7" exitCode=137 Dec 03 14:28:22.957857 master-0 kubenswrapper[4409]: I1203 14:28:22.957762 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Dec 03 14:28:22.957980 master-0 kubenswrapper[4409]: I1203 14:28:22.957924 4409 scope.go:117] "RemoveContainer" containerID="7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7" Dec 03 14:28:22.974418 master-0 kubenswrapper[4409]: I1203 14:28:22.974378 4409 scope.go:117] "RemoveContainer" containerID="7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7" Dec 03 14:28:22.975175 master-0 kubenswrapper[4409]: E1203 14:28:22.975133 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7\": container with ID starting with 7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7 not found: ID does not exist" containerID="7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7" Dec 03 14:28:22.975266 master-0 
kubenswrapper[4409]: I1203 14:28:22.975185 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7"} err="failed to get container status \"7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7\": rpc error: code = NotFound desc = could not find container \"7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7\": container with ID starting with 7e72d8170c0cf5e9f1e7b419d1f6c6667141274e8867953b899a5c96badbfea7 not found: ID does not exist" Dec 03 14:28:23.824445 master-0 kubenswrapper[4409]: I1203 14:28:23.824394 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff63f5c90356b311bfd02f62719f7c37" path="/var/lib/kubelet/pods/ff63f5c90356b311bfd02f62719f7c37/volumes" Dec 03 14:28:23.825037 master-0 kubenswrapper[4409]: I1203 14:28:23.824665 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Dec 03 14:28:23.870530 master-0 kubenswrapper[4409]: I1203 14:28:23.870483 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:28:23.870530 master-0 kubenswrapper[4409]: I1203 14:28:23.870526 4409 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="02190bdf-0744-49b2-9901-3f0287b110d2" Dec 03 14:28:23.875774 master-0 kubenswrapper[4409]: I1203 14:28:23.875737 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Dec 03 14:28:23.875774 master-0 kubenswrapper[4409]: I1203 14:28:23.875768 4409 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="02190bdf-0744-49b2-9901-3f0287b110d2" Dec 03 
14:28:29.007915 master-0 kubenswrapper[4409]: I1203 14:28:29.007792 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Dec 03 14:28:32.838336 master-0 kubenswrapper[4409]: I1203 14:28:32.838118 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:28:32.839481 master-0 kubenswrapper[4409]: I1203 14:28:32.838525 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:28:32.839481 master-0 kubenswrapper[4409]: I1203 14:28:32.838545 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4" Dec 03 14:28:32.857344 master-0 kubenswrapper[4409]: I1203 14:28:32.857211 4409 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Dec 03 14:28:32.858053 master-0 kubenswrapper[4409]: I1203 14:28:32.857897 4409 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5b0add1-6a3b-4ab3-9334-83f7416876e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:28:32Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-03T14:28:32Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ce3fc232f1c2a929c3d5f87d2fec30dfda38c85b8465c95f77e590f351bde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166052b43852ddf5583cd27464a48563cc65abbde40fae2e7c4572ed6f28a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ce0a4dd19c57bd56c4adeb228a8bf8e724f0a1fecd7eb3e361ff0396ef4196c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-03T14:27:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}]}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-master-0\": pods \"openshift-kube-scheduler-master-0\" not found" Dec 03 14:28:32.859278 master-0 kubenswrapper[4409]: I1203 14:28:32.859244 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:28:32.866250 master-0 kubenswrapper[4409]: I1203 14:28:32.866166 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:28:32.880247 master-0 kubenswrapper[4409]: I1203 14:28:32.880167 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Dec 03 14:28:33.033680 master-0 kubenswrapper[4409]: I1203 14:28:33.033584 4409 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4"
Dec 03 14:28:33.033680 master-0 kubenswrapper[4409]: I1203 14:28:33.033645 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c5b0add1-6a3b-4ab3-9334-83f7416876e4"
Dec 03 14:28:34.796126 master-0 kubenswrapper[4409]: I1203 14:28:34.795983 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:28:34.796126 master-0 kubenswrapper[4409]: I1203 14:28:34.796132 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:28:37.855042 master-0 kubenswrapper[4409]: I1203 14:28:37.854053 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=5.853951628 podStartE2EDuration="5.853951628s" podCreationTimestamp="2025-12-03 14:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:28:37.852191369 +0000 UTC m=+150.179253975" watchObservedRunningTime="2025-12-03 14:28:37.853951628 +0000 UTC m=+150.181014134"
Dec 03 14:28:39.274404 master-0 kubenswrapper[4409]: I1203 14:28:39.274290 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 03 14:28:50.507981 master-0 kubenswrapper[4409]: I1203
14:28:50.507898 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Dec 03 14:28:50.595422 master-0 kubenswrapper[4409]: I1203 14:28:50.595297 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 03 14:28:59.537336 master-0 kubenswrapper[4409]: I1203 14:28:59.537265 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 03 14:29:04.795328 master-0 kubenswrapper[4409]: I1203 14:29:04.795230 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:29:04.795328 master-0 kubenswrapper[4409]: I1203 14:29:04.795323 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:29:34.795816 master-0 kubenswrapper[4409]: I1203 14:29:34.795622 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:29:34.795816 master-0 kubenswrapper[4409]: I1203 14:29:34.795707 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:29:34.795816 master-0 kubenswrapper[4409]: I1203 14:29:34.795777 4409 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:29:34.798331 master-0 kubenswrapper[4409]: I1203 14:29:34.798150 4409 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e"} pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 14:29:34.798429 master-0 kubenswrapper[4409]: I1203 14:29:34.798401 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" containerID="cri-o://8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e" gracePeriod=600
Dec 03 14:29:35.509376 master-0 kubenswrapper[4409]: I1203 14:29:35.509327 4409 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e" exitCode=0
Dec 03 14:29:35.509376 master-0 kubenswrapper[4409]: I1203 14:29:35.509356 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e"}
Dec 03 14:29:35.509376 master-0 kubenswrapper[4409]: I1203 14:29:35.509440 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"7361af56804186d5550e28276c7b5d986703bc3ec5adf055804564cd0be8d594"}
Dec 03 14:30:37.672215 master-0 kubenswrapper[4409]: E1203 14:30:37.672134 4409 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]"
Dec 03 14:31:00.281657 master-0 kubenswrapper[4409]: I1203 14:31:00.281583 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"]
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: E1203 14:31:00.281928 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff63f5c90356b311bfd02f62719f7c37" containerName="startup-monitor"
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: I1203 14:31:00.281951 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff63f5c90356b311bfd02f62719f7c37" containerName="startup-monitor"
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: E1203 14:31:00.281980 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" containerName="installer"
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: I1203 14:31:00.281986 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" containerName="installer"
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: I1203 14:31:00.282180 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff63f5c90356b311bfd02f62719f7c37" containerName="startup-monitor"
Dec 03 14:31:00.282358 master-0 kubenswrapper[4409]: I1203 14:31:00.282208 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be147fe-84e2-429b-9d53-91fd67fef7c4" containerName="installer"
Dec 03 14:31:00.283039 master-0 kubenswrapper[4409]: I1203 14:31:00.282987 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.299388 master-0 kubenswrapper[4409]: I1203 14:31:00.299311 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"]
Dec 03 14:31:00.302031 master-0 kubenswrapper[4409]: I1203 14:31:00.301286 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.344073 master-0 kubenswrapper[4409]: I1203 14:31:00.343972 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5msvs"]
Dec 03 14:31:00.348398 master-0 kubenswrapper[4409]: I1203 14:31:00.345742 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.354204 master-0 kubenswrapper[4409]: I1203 14:31:00.353078 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"]
Dec 03 14:31:00.355337 master-0 kubenswrapper[4409]: I1203 14:31:00.355296 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.355992 master-0 kubenswrapper[4409]: I1203 14:31:00.355941 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.356280 master-0 kubenswrapper[4409]: I1203 14:31:00.355991 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7pl4\" (UniqueName: \"kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.356454 master-0 kubenswrapper[4409]: I1203 14:31:00.356373 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.358602 master-0 kubenswrapper[4409]: I1203 14:31:00.358529 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-szmjn"]
Dec 03 14:31:00.360881 master-0 kubenswrapper[4409]: I1203 14:31:00.360827 4409 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.363713 master-0 kubenswrapper[4409]: I1203 14:31:00.363663 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"]
Dec 03 14:31:00.365129 master-0 kubenswrapper[4409]: I1203 14:31:00.365097 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.369517 master-0 kubenswrapper[4409]: I1203 14:31:00.369457 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 03 14:31:00.369762 master-0 kubenswrapper[4409]: I1203 14:31:00.369457 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-qldhm"
Dec 03 14:31:00.373473 master-0 kubenswrapper[4409]: I1203 14:31:00.372148 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"]
Dec 03 14:31:00.377723 master-0 kubenswrapper[4409]: I1203 14:31:00.377636 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"]
Dec 03 14:31:00.387181 master-0 kubenswrapper[4409]: I1203 14:31:00.385259 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"]
Dec 03 14:31:00.392994 master-0 kubenswrapper[4409]: I1203 14:31:00.392937 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szmjn"]
Dec 03 14:31:00.399418 master-0 kubenswrapper[4409]: I1203 14:31:00.399366 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"]
Dec 03 14:31:00.413168 master-0 kubenswrapper[4409]: I1203 14:31:00.412824 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api"
pods=["openshift-marketplace/certified-operators-5msvs"]
Dec 03 14:31:00.459737 master-0 kubenswrapper[4409]: I1203 14:31:00.459631 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.459737 master-0 kubenswrapper[4409]: I1203 14:31:00.459721 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.459737 master-0 kubenswrapper[4409]: I1203 14:31:00.459756 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.459777 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.459802 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName:
\"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.459927 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.459981 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnv8\" (UniqueName: \"kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460051 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460074 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cql\" (UniqueName: \"kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.460246 master-0
kubenswrapper[4409]: I1203 14:31:00.460109 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460129 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7pl4\" (UniqueName: \"kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460156 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460183 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.460246 master-0 kubenswrapper[4409]: I1203 14:31:00.460211 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content\") pod \"community-operators-szmjn\" (UID:
\"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.460741 master-0 kubenswrapper[4409]: I1203 14:31:00.460555 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.460741 master-0 kubenswrapper[4409]: I1203 14:31:00.460669 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlf5\" (UniqueName: \"kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.460839 master-0 kubenswrapper[4409]: I1203 14:31:00.460757 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.460839 master-0 kubenswrapper[4409]: I1203 14:31:00.460780 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.460839 master-0 kubenswrapper[4409]: I1203 14:31:00.460813 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.461038 master-0 kubenswrapper[4409]: I1203 14:31:00.460843 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.461038 master-0 kubenswrapper[4409]: I1203 14:31:00.460876 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8m4s\" (UniqueName: \"kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.461038 master-0 kubenswrapper[4409]: I1203 14:31:00.460934 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msww\" (UniqueName: \"kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.461274 master-0 kubenswrapper[4409]: I1203 14:31:00.461233 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.461605 master-0 kubenswrapper[4409]: I1203 14:31:00.461569 4409
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.477430 master-0 kubenswrapper[4409]: I1203 14:31:00.477373 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7pl4\" (UniqueName: \"kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4\") pod \"redhat-operators-fh5cv\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") " pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:00.562362 master-0 kubenswrapper[4409]: I1203 14:31:00.562282 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.562362 master-0 kubenswrapper[4409]: I1203 14:31:00.562350 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562384 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]:
I1203 14:31:00.562416 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562443 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlf5\" (UniqueName: \"kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562472 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562491 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562534 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03
14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562557 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8m4s\" (UniqueName: \"kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562590 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msww\" (UniqueName: \"kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562619 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562654 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.562694 master-0 kubenswrapper[4409]: I1203 14:31:00.562693 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content\") pod \"redhat-marketplace-vrhjw\" (UID:
\"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562724 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562754 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562809 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562844 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnv8\" (UniqueName: \"kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562875 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume\")
pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.562897 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2cql\" (UniqueName: \"kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.563628 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.563771 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:00.563866 master-0 kubenswrapper[4409]: I1203 14:31:00.563822 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:00.564478 master-0 kubenswrapper[4409]: I1203 14:31:00.564170 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName:
\"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:00.565431 master-0 kubenswrapper[4409]: I1203 14:31:00.565342 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:00.566971 master-0 kubenswrapper[4409]: I1203 14:31:00.566912 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw" Dec 03 14:31:00.566971 master-0 kubenswrapper[4409]: I1203 14:31:00.566942 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw" Dec 03 14:31:00.567903 master-0 kubenswrapper[4409]: I1203 14:31:00.567134 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.567903 master-0 kubenswrapper[4409]: I1203 14:31:00.567721 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:00.569088 master-0 kubenswrapper[4409]: I1203 14:31:00.568509 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.569088 master-0 kubenswrapper[4409]: I1203 14:31:00.568697 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.571841 master-0 kubenswrapper[4409]: I1203 14:31:00.571784 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.577809 master-0 kubenswrapper[4409]: I1203 14:31:00.577757 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:00.578766 master-0 kubenswrapper[4409]: I1203 14:31:00.578710 4409 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.581652 master-0 kubenswrapper[4409]: I1203 14:31:00.581600 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msww\" (UniqueName: \"kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww\") pod \"collect-profiles-29412870-qng6z\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:00.582082 master-0 kubenswrapper[4409]: I1203 14:31:00.581987 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8m4s\" (UniqueName: \"kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s\") pod \"console-6f689c85c4-fv97m\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") " pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.582392 master-0 kubenswrapper[4409]: I1203 14:31:00.582335 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlf5\" (UniqueName: \"kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5\") pod \"community-operators-szmjn\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") " pod="openshift-marketplace/community-operators-szmjn" Dec 03 14:31:00.588054 master-0 kubenswrapper[4409]: I1203 14:31:00.587953 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2cql\" (UniqueName: \"kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql\") pod \"redhat-marketplace-vrhjw\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") " pod="openshift-marketplace/redhat-marketplace-vrhjw" Dec 03 14:31:00.588054 master-0 kubenswrapper[4409]: I1203 
14:31:00.587992 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnv8\" (UniqueName: \"kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8\") pod \"certified-operators-5msvs\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") " pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:00.619535 master-0 kubenswrapper[4409]: I1203 14:31:00.619404 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:00.637706 master-0 kubenswrapper[4409]: I1203 14:31:00.635649 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fh5cv" Dec 03 14:31:00.671378 master-0 kubenswrapper[4409]: I1203 14:31:00.671265 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:00.687430 master-0 kubenswrapper[4409]: I1203 14:31:00.687362 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrhjw" Dec 03 14:31:00.707333 master-0 kubenswrapper[4409]: I1203 14:31:00.707279 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szmjn" Dec 03 14:31:00.723753 master-0 kubenswrapper[4409]: I1203 14:31:00.723705 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:01.316763 master-0 kubenswrapper[4409]: I1203 14:31:01.316663 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"] Dec 03 14:31:01.652045 master-0 kubenswrapper[4409]: W1203 14:31:01.651170 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2526e28d_9fea_4b08_a5da_599f7fb81b1e.slice/crio-12bb2f64d9873250f7390dfa7388c4cb78ce540528950a9788086d697cb79a20 WatchSource:0}: Error finding container 12bb2f64d9873250f7390dfa7388c4cb78ce540528950a9788086d697cb79a20: Status 404 returned error can't find the container with id 12bb2f64d9873250f7390dfa7388c4cb78ce540528950a9788086d697cb79a20 Dec 03 14:31:01.659239 master-0 kubenswrapper[4409]: I1203 14:31:01.659187 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5msvs"] Dec 03 14:31:01.666363 master-0 kubenswrapper[4409]: W1203 14:31:01.666307 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e8aa1b1_5bf2_4763_9338_ef927345f786.slice/crio-e70721bdf99f549fdd6faeef2cf23d6c7b67cba5a93bfd6752fed504d359f9e6 WatchSource:0}: Error finding container e70721bdf99f549fdd6faeef2cf23d6c7b67cba5a93bfd6752fed504d359f9e6: Status 404 returned error can't find the container with id e70721bdf99f549fdd6faeef2cf23d6c7b67cba5a93bfd6752fed504d359f9e6 Dec 03 14:31:01.677331 master-0 kubenswrapper[4409]: I1203 14:31:01.677272 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-szmjn"] Dec 03 14:31:01.690770 master-0 kubenswrapper[4409]: I1203 14:31:01.690716 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z"] Dec 03 14:31:01.701658 master-0 
kubenswrapper[4409]: I1203 14:31:01.701605 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"] Dec 03 14:31:01.707288 master-0 kubenswrapper[4409]: I1203 14:31:01.707243 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"] Dec 03 14:31:02.307847 master-0 kubenswrapper[4409]: I1203 14:31:02.307695 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" event={"ID":"ce0878b7-82ed-4efb-b984-38ad8fd8f185","Type":"ContainerStarted","Data":"944a8ba16af8c2f15dffcc63c44fa8b3e9435790d67ac52c03c0515952f351af"} Dec 03 14:31:02.308369 master-0 kubenswrapper[4409]: I1203 14:31:02.307917 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" event={"ID":"ce0878b7-82ed-4efb-b984-38ad8fd8f185","Type":"ContainerStarted","Data":"4bc43f0b61d34d57420fe29143200ddad2d11572987136a95421d2b30655b8d5"} Dec 03 14:31:02.315705 master-0 kubenswrapper[4409]: I1203 14:31:02.315613 4409 generic.go:334] "Generic (PLEG): container finished" podID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerID="56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37" exitCode=0 Dec 03 14:31:02.315962 master-0 kubenswrapper[4409]: I1203 14:31:02.315709 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerDied","Data":"56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37"} Dec 03 14:31:02.315962 master-0 kubenswrapper[4409]: I1203 14:31:02.315766 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerStarted","Data":"e70721bdf99f549fdd6faeef2cf23d6c7b67cba5a93bfd6752fed504d359f9e6"} Dec 03 
14:31:02.318277 master-0 kubenswrapper[4409]: I1203 14:31:02.318207 4409 generic.go:334] "Generic (PLEG): container finished" podID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerID="596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3" exitCode=0 Dec 03 14:31:02.319225 master-0 kubenswrapper[4409]: I1203 14:31:02.318301 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerDied","Data":"596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3"} Dec 03 14:31:02.319225 master-0 kubenswrapper[4409]: I1203 14:31:02.318328 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerStarted","Data":"2eab94e6602a4e497b524643aa98c2e042c5f0ea806f6e43a0571d4da639470b"} Dec 03 14:31:02.320143 master-0 kubenswrapper[4409]: I1203 14:31:02.320092 4409 generic.go:334] "Generic (PLEG): container finished" podID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerID="3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8" exitCode=0 Dec 03 14:31:02.320308 master-0 kubenswrapper[4409]: I1203 14:31:02.320164 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerDied","Data":"3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8"} Dec 03 14:31:02.320308 master-0 kubenswrapper[4409]: I1203 14:31:02.320190 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerStarted","Data":"0324123106df7539cf7bdb24353bb930a8eb4fbe914b1a6f91adab2b238d4ce4"} Dec 03 14:31:02.322540 master-0 kubenswrapper[4409]: I1203 14:31:02.322486 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-6f689c85c4-fv97m" event={"ID":"50223b50-44db-4dad-95c9-fcd93aad3c7c","Type":"ContainerStarted","Data":"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"} Dec 03 14:31:02.322540 master-0 kubenswrapper[4409]: I1203 14:31:02.322517 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f689c85c4-fv97m" event={"ID":"50223b50-44db-4dad-95c9-fcd93aad3c7c","Type":"ContainerStarted","Data":"1ea64a1801dd5f4d806c35660c47d2849deb9d33308a1112b3cc10348fb41328"} Dec 03 14:31:02.324045 master-0 kubenswrapper[4409]: I1203 14:31:02.323958 4409 generic.go:334] "Generic (PLEG): container finished" podID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerID="7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4" exitCode=0 Dec 03 14:31:02.324186 master-0 kubenswrapper[4409]: I1203 14:31:02.324077 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerDied","Data":"7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4"} Dec 03 14:31:02.324355 master-0 kubenswrapper[4409]: I1203 14:31:02.324183 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerStarted","Data":"12bb2f64d9873250f7390dfa7388c4cb78ce540528950a9788086d697cb79a20"} Dec 03 14:31:02.647841 master-0 kubenswrapper[4409]: I1203 14:31:02.647712 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" podStartSLOduration=62.647672117 podStartE2EDuration="1m2.647672117s" podCreationTimestamp="2025-12-03 14:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:31:02.643107727 +0000 UTC m=+294.970170273" 
watchObservedRunningTime="2025-12-03 14:31:02.647672117 +0000 UTC m=+294.974734633" Dec 03 14:31:03.064132 master-0 kubenswrapper[4409]: I1203 14:31:03.063930 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f689c85c4-fv97m" podStartSLOduration=276.063906106 podStartE2EDuration="4m36.063906106s" podCreationTimestamp="2025-12-03 14:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:31:03.063161495 +0000 UTC m=+295.390224001" watchObservedRunningTime="2025-12-03 14:31:03.063906106 +0000 UTC m=+295.390968612" Dec 03 14:31:03.349081 master-0 kubenswrapper[4409]: I1203 14:31:03.348873 4409 generic.go:334] "Generic (PLEG): container finished" podID="ce0878b7-82ed-4efb-b984-38ad8fd8f185" containerID="944a8ba16af8c2f15dffcc63c44fa8b3e9435790d67ac52c03c0515952f351af" exitCode=0 Dec 03 14:31:03.349988 master-0 kubenswrapper[4409]: I1203 14:31:03.349200 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" event={"ID":"ce0878b7-82ed-4efb-b984-38ad8fd8f185","Type":"ContainerDied","Data":"944a8ba16af8c2f15dffcc63c44fa8b3e9435790d67ac52c03c0515952f351af"} Dec 03 14:31:04.617516 master-0 kubenswrapper[4409]: I1203 14:31:04.617416 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:04.738489 master-0 kubenswrapper[4409]: I1203 14:31:04.738359 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msww\" (UniqueName: \"kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww\") pod \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " Dec 03 14:31:04.738489 master-0 kubenswrapper[4409]: I1203 14:31:04.738432 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume\") pod \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " Dec 03 14:31:04.738787 master-0 kubenswrapper[4409]: I1203 14:31:04.738581 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume\") pod \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\" (UID: \"ce0878b7-82ed-4efb-b984-38ad8fd8f185\") " Dec 03 14:31:04.739290 master-0 kubenswrapper[4409]: I1203 14:31:04.739240 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce0878b7-82ed-4efb-b984-38ad8fd8f185" (UID: "ce0878b7-82ed-4efb-b984-38ad8fd8f185"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:31:04.739674 master-0 kubenswrapper[4409]: I1203 14:31:04.739653 4409 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce0878b7-82ed-4efb-b984-38ad8fd8f185-config-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:04.744287 master-0 kubenswrapper[4409]: I1203 14:31:04.744180 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce0878b7-82ed-4efb-b984-38ad8fd8f185" (UID: "ce0878b7-82ed-4efb-b984-38ad8fd8f185"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:31:04.744585 master-0 kubenswrapper[4409]: I1203 14:31:04.744552 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww" (OuterVolumeSpecName: "kube-api-access-9msww") pod "ce0878b7-82ed-4efb-b984-38ad8fd8f185" (UID: "ce0878b7-82ed-4efb-b984-38ad8fd8f185"). InnerVolumeSpecName "kube-api-access-9msww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:31:04.841118 master-0 kubenswrapper[4409]: I1203 14:31:04.841027 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9msww\" (UniqueName: \"kubernetes.io/projected/ce0878b7-82ed-4efb-b984-38ad8fd8f185-kube-api-access-9msww\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:04.841118 master-0 kubenswrapper[4409]: I1203 14:31:04.841090 4409 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce0878b7-82ed-4efb-b984-38ad8fd8f185-secret-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:05.365930 master-0 kubenswrapper[4409]: I1203 14:31:05.365838 4409 generic.go:334] "Generic (PLEG): container finished" podID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerID="998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30" exitCode=0 Dec 03 14:31:05.366278 master-0 kubenswrapper[4409]: I1203 14:31:05.365910 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerDied","Data":"998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30"} Dec 03 14:31:05.368764 master-0 kubenswrapper[4409]: I1203 14:31:05.368687 4409 generic.go:334] "Generic (PLEG): container finished" podID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerID="e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e" exitCode=0 Dec 03 14:31:05.368851 master-0 kubenswrapper[4409]: I1203 14:31:05.368823 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerDied","Data":"e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e"} Dec 03 14:31:05.371589 master-0 kubenswrapper[4409]: I1203 14:31:05.371550 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" event={"ID":"ce0878b7-82ed-4efb-b984-38ad8fd8f185","Type":"ContainerDied","Data":"4bc43f0b61d34d57420fe29143200ddad2d11572987136a95421d2b30655b8d5"} Dec 03 14:31:05.371678 master-0 kubenswrapper[4409]: I1203 14:31:05.371586 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412870-qng6z" Dec 03 14:31:05.371678 master-0 kubenswrapper[4409]: I1203 14:31:05.371597 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc43f0b61d34d57420fe29143200ddad2d11572987136a95421d2b30655b8d5" Dec 03 14:31:05.374640 master-0 kubenswrapper[4409]: I1203 14:31:05.373562 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerStarted","Data":"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"} Dec 03 14:31:05.377477 master-0 kubenswrapper[4409]: I1203 14:31:05.377445 4409 generic.go:334] "Generic (PLEG): container finished" podID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerID="84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96" exitCode=0 Dec 03 14:31:05.377593 master-0 kubenswrapper[4409]: I1203 14:31:05.377482 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerDied","Data":"84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96"} Dec 03 14:31:06.386665 master-0 kubenswrapper[4409]: I1203 14:31:06.386606 4409 generic.go:334] "Generic (PLEG): container finished" podID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerID="d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55" exitCode=0 Dec 03 14:31:06.386665 master-0 kubenswrapper[4409]: I1203 14:31:06.386658 4409 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerDied","Data":"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"} Dec 03 14:31:07.396762 master-0 kubenswrapper[4409]: I1203 14:31:07.396712 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerStarted","Data":"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"} Dec 03 14:31:07.400085 master-0 kubenswrapper[4409]: I1203 14:31:07.400036 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerStarted","Data":"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2"} Dec 03 14:31:07.404339 master-0 kubenswrapper[4409]: I1203 14:31:07.404304 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerStarted","Data":"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"} Dec 03 14:31:07.719536 master-0 kubenswrapper[4409]: I1203 14:31:07.719453 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-szmjn" podStartSLOduration=254.441950724 podStartE2EDuration="4m18.719433523s" podCreationTimestamp="2025-12-03 14:26:49 +0000 UTC" firstStartedPulling="2025-12-03 14:31:02.321169097 +0000 UTC m=+294.648231603" lastFinishedPulling="2025-12-03 14:31:06.598651896 +0000 UTC m=+298.925714402" observedRunningTime="2025-12-03 14:31:07.71369298 +0000 UTC m=+300.040755486" watchObservedRunningTime="2025-12-03 14:31:07.719433523 +0000 UTC m=+300.046496029" Dec 03 14:31:07.828910 master-0 kubenswrapper[4409]: I1203 14:31:07.828782 4409 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/certified-operators-5msvs" podStartSLOduration=254.557744241 podStartE2EDuration="4m18.828760837s" podCreationTimestamp="2025-12-03 14:26:49 +0000 UTC" firstStartedPulling="2025-12-03 14:31:02.321552928 +0000 UTC m=+294.648615434" lastFinishedPulling="2025-12-03 14:31:06.592569524 +0000 UTC m=+298.919632030" observedRunningTime="2025-12-03 14:31:07.825885215 +0000 UTC m=+300.152947731" watchObservedRunningTime="2025-12-03 14:31:07.828760837 +0000 UTC m=+300.155823343" Dec 03 14:31:07.835486 master-0 kubenswrapper[4409]: I1203 14:31:07.835419 4409 kubelet.go:1505] "Image garbage collection succeeded" Dec 03 14:31:07.925896 master-0 kubenswrapper[4409]: I1203 14:31:07.925547 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrhjw" podStartSLOduration=253.60611564 podStartE2EDuration="4m17.925520825s" podCreationTimestamp="2025-12-03 14:26:50 +0000 UTC" firstStartedPulling="2025-12-03 14:31:02.325446708 +0000 UTC m=+294.652509214" lastFinishedPulling="2025-12-03 14:31:06.644851893 +0000 UTC m=+298.971914399" observedRunningTime="2025-12-03 14:31:07.92251767 +0000 UTC m=+300.249580186" watchObservedRunningTime="2025-12-03 14:31:07.925520825 +0000 UTC m=+300.252583331" Dec 03 14:31:08.414362 master-0 kubenswrapper[4409]: I1203 14:31:08.414203 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerStarted","Data":"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"} Dec 03 14:31:08.556307 master-0 kubenswrapper[4409]: I1203 14:31:08.556176 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fh5cv" podStartSLOduration=248.606641994 podStartE2EDuration="4m13.556142061s" podCreationTimestamp="2025-12-03 14:26:55 +0000 UTC" firstStartedPulling="2025-12-03 
14:31:02.31738199 +0000 UTC m=+294.644444516" lastFinishedPulling="2025-12-03 14:31:07.266882077 +0000 UTC m=+299.593944583" observedRunningTime="2025-12-03 14:31:08.549861633 +0000 UTC m=+300.876924139" watchObservedRunningTime="2025-12-03 14:31:08.556142061 +0000 UTC m=+300.883204567" Dec 03 14:31:10.620535 master-0 kubenswrapper[4409]: I1203 14:31:10.620455 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:10.620535 master-0 kubenswrapper[4409]: I1203 14:31:10.620533 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:10.626405 master-0 kubenswrapper[4409]: I1203 14:31:10.626352 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f689c85c4-fv97m" Dec 03 14:31:10.639606 master-0 kubenswrapper[4409]: I1203 14:31:10.639554 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fh5cv" Dec 03 14:31:10.639606 master-0 kubenswrapper[4409]: I1203 14:31:10.639602 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fh5cv" Dec 03 14:31:10.671679 master-0 kubenswrapper[4409]: I1203 14:31:10.671633 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:10.671908 master-0 kubenswrapper[4409]: I1203 14:31:10.671696 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5msvs" Dec 03 14:31:10.687634 master-0 kubenswrapper[4409]: I1203 14:31:10.687578 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrhjw" Dec 03 14:31:10.687634 master-0 kubenswrapper[4409]: I1203 14:31:10.687632 4409 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:10.707705 master-0 kubenswrapper[4409]: I1203 14:31:10.707634 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:10.708471 master-0 kubenswrapper[4409]: I1203 14:31:10.708392 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:10.724762 master-0 kubenswrapper[4409]: I1203 14:31:10.724694 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:10.727818 master-0 kubenswrapper[4409]: I1203 14:31:10.727780 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:10.749133 master-0 kubenswrapper[4409]: I1203 14:31:10.749041 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:11.438953 master-0 kubenswrapper[4409]: I1203 14:31:11.438870 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:31:11.476751 master-0 kubenswrapper[4409]: I1203 14:31:11.476679 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:11.494957 master-0 kubenswrapper[4409]: I1203 14:31:11.494863 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:11.679830 master-0 kubenswrapper[4409]: I1203 14:31:11.679756 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fh5cv" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="registry-server" probeResult="failure" output=<
Dec 03 14:31:11.679830 master-0 kubenswrapper[4409]: timeout: failed to connect service ":50051" within 1s
Dec 03 14:31:11.679830 master-0 kubenswrapper[4409]: >
Dec 03 14:31:12.391422 master-0 kubenswrapper[4409]: I1203 14:31:12.388970 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"]
Dec 03 14:31:13.884210 master-0 kubenswrapper[4409]: I1203 14:31:13.883988 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"]
Dec 03 14:31:13.885083 master-0 kubenswrapper[4409]: I1203 14:31:13.884862 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrhjw" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="registry-server" containerID="cri-o://022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba" gracePeriod=2
Dec 03 14:31:14.310662 master-0 kubenswrapper[4409]: I1203 14:31:14.310609 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:14.828069 master-0 kubenswrapper[4409]: I1203 14:31:14.826793 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2cql\" (UniqueName: \"kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql\") pod \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") "
Dec 03 14:31:14.828069 master-0 kubenswrapper[4409]: I1203 14:31:14.827065 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content\") pod \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") "
Dec 03 14:31:14.828441 master-0 kubenswrapper[4409]: I1203 14:31:14.828374 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities\") pod \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\" (UID: \"2526e28d-9fea-4b08-a5da-599f7fb81b1e\") "
Dec 03 14:31:14.834749 master-0 kubenswrapper[4409]: I1203 14:31:14.832758 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities" (OuterVolumeSpecName: "utilities") pod "2526e28d-9fea-4b08-a5da-599f7fb81b1e" (UID: "2526e28d-9fea-4b08-a5da-599f7fb81b1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:14.834749 master-0 kubenswrapper[4409]: I1203 14:31:14.834541 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql" (OuterVolumeSpecName: "kube-api-access-k2cql") pod "2526e28d-9fea-4b08-a5da-599f7fb81b1e" (UID: "2526e28d-9fea-4b08-a5da-599f7fb81b1e"). InnerVolumeSpecName "kube-api-access-k2cql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:31:14.840134 master-0 kubenswrapper[4409]: I1203 14:31:14.840071 4409 generic.go:334] "Generic (PLEG): container finished" podID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerID="022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba" exitCode=0
Dec 03 14:31:14.840233 master-0 kubenswrapper[4409]: I1203 14:31:14.840141 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerDied","Data":"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"}
Dec 03 14:31:14.840233 master-0 kubenswrapper[4409]: I1203 14:31:14.840186 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrhjw" event={"ID":"2526e28d-9fea-4b08-a5da-599f7fb81b1e","Type":"ContainerDied","Data":"12bb2f64d9873250f7390dfa7388c4cb78ce540528950a9788086d697cb79a20"}
Dec 03 14:31:14.840312 master-0 kubenswrapper[4409]: I1203 14:31:14.840273 4409 scope.go:117] "RemoveContainer" containerID="022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"
Dec 03 14:31:14.840511 master-0 kubenswrapper[4409]: I1203 14:31:14.840483 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrhjw"
Dec 03 14:31:14.850218 master-0 kubenswrapper[4409]: I1203 14:31:14.850171 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2526e28d-9fea-4b08-a5da-599f7fb81b1e" (UID: "2526e28d-9fea-4b08-a5da-599f7fb81b1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:14.866094 master-0 kubenswrapper[4409]: I1203 14:31:14.866025 4409 scope.go:117] "RemoveContainer" containerID="e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e"
Dec 03 14:31:14.898179 master-0 kubenswrapper[4409]: I1203 14:31:14.896788 4409 scope.go:117] "RemoveContainer" containerID="7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4"
Dec 03 14:31:14.913982 master-0 kubenswrapper[4409]: I1203 14:31:14.913470 4409 scope.go:117] "RemoveContainer" containerID="022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"
Dec 03 14:31:14.914214 master-0 kubenswrapper[4409]: E1203 14:31:14.914162 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba\": container with ID starting with 022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba not found: ID does not exist" containerID="022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"
Dec 03 14:31:14.914274 master-0 kubenswrapper[4409]: I1203 14:31:14.914220 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba"} err="failed to get container status \"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba\": rpc error: code = NotFound desc = could not find container \"022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba\": container with ID starting with 022fb29ca4a2bc083fcb4b1581e25750d2d5638aea2a156fff484ed9bae549ba not found: ID does not exist"
Dec 03 14:31:14.914274 master-0 kubenswrapper[4409]: I1203 14:31:14.914261 4409 scope.go:117] "RemoveContainer" containerID="e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e"
Dec 03 14:31:14.914880 master-0 kubenswrapper[4409]: E1203 14:31:14.914817 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e\": container with ID starting with e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e not found: ID does not exist" containerID="e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e"
Dec 03 14:31:14.914955 master-0 kubenswrapper[4409]: I1203 14:31:14.914897 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e"} err="failed to get container status \"e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e\": rpc error: code = NotFound desc = could not find container \"e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e\": container with ID starting with e910ec1b5c79764d2b903d8aa2ccacbe2035ef3942c56e58af2805dd58e5506e not found: ID does not exist"
Dec 03 14:31:14.914955 master-0 kubenswrapper[4409]: I1203 14:31:14.914942 4409 scope.go:117] "RemoveContainer" containerID="7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4"
Dec 03 14:31:14.915595 master-0 kubenswrapper[4409]: E1203 14:31:14.915553 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4\": container with ID starting with 7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4 not found: ID does not exist" containerID="7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4"
Dec 03 14:31:14.915727 master-0 kubenswrapper[4409]: I1203 14:31:14.915691 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4"} err="failed to get container status \"7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4\": rpc error: code = NotFound desc = could not find container \"7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4\": container with ID starting with 7d892f7065ecc42461458f14d6865264aec8f08fbf5478325562d527a561ada4 not found: ID does not exist"
Dec 03 14:31:14.932753 master-0 kubenswrapper[4409]: I1203 14:31:14.932675 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:14.933160 master-0 kubenswrapper[4409]: I1203 14:31:14.933146 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2526e28d-9fea-4b08-a5da-599f7fb81b1e-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:14.933248 master-0 kubenswrapper[4409]: I1203 14:31:14.933231 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2cql\" (UniqueName: \"kubernetes.io/projected/2526e28d-9fea-4b08-a5da-599f7fb81b1e-kube-api-access-k2cql\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:16.353469 master-0 kubenswrapper[4409]: I1203 14:31:16.353398 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"]
Dec 03 14:31:19.051328 master-0 kubenswrapper[4409]: I1203 14:31:19.051211 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrhjw"]
Dec 03 14:31:19.088344 master-0 kubenswrapper[4409]: I1203 14:31:19.088253 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szmjn"]
Dec 03 14:31:19.088726 master-0 kubenswrapper[4409]: I1203 14:31:19.088630 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-szmjn" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="registry-server" containerID="cri-o://287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a" gracePeriod=2
Dec 03 14:31:19.582552 master-0 kubenswrapper[4409]: I1203 14:31:19.582506 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:19.724907 master-0 kubenswrapper[4409]: I1203 14:31:19.724845 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlf5\" (UniqueName: \"kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5\") pod \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") "
Dec 03 14:31:19.725235 master-0 kubenswrapper[4409]: I1203 14:31:19.725109 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities\") pod \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") "
Dec 03 14:31:19.725235 master-0 kubenswrapper[4409]: I1203 14:31:19.725138 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content\") pod \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\" (UID: \"f7f59cd6-2b04-4124-b4f3-e97bd3684f57\") "
Dec 03 14:31:19.727091 master-0 kubenswrapper[4409]: I1203 14:31:19.726283 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities" (OuterVolumeSpecName: "utilities") pod "f7f59cd6-2b04-4124-b4f3-e97bd3684f57" (UID: "f7f59cd6-2b04-4124-b4f3-e97bd3684f57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:19.727682 master-0 kubenswrapper[4409]: I1203 14:31:19.727609 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5" (OuterVolumeSpecName: "kube-api-access-jzlf5") pod "f7f59cd6-2b04-4124-b4f3-e97bd3684f57" (UID: "f7f59cd6-2b04-4124-b4f3-e97bd3684f57"). InnerVolumeSpecName "kube-api-access-jzlf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:31:19.776324 master-0 kubenswrapper[4409]: I1203 14:31:19.776062 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7f59cd6-2b04-4124-b4f3-e97bd3684f57" (UID: "f7f59cd6-2b04-4124-b4f3-e97bd3684f57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:19.825063 master-0 kubenswrapper[4409]: I1203 14:31:19.824975 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" path="/var/lib/kubelet/pods/2526e28d-9fea-4b08-a5da-599f7fb81b1e/volumes"
Dec 03 14:31:19.827709 master-0 kubenswrapper[4409]: I1203 14:31:19.827669 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:19.827709 master-0 kubenswrapper[4409]: I1203 14:31:19.827707 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:19.827877 master-0 kubenswrapper[4409]: I1203 14:31:19.827720 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzlf5\" (UniqueName: \"kubernetes.io/projected/f7f59cd6-2b04-4124-b4f3-e97bd3684f57-kube-api-access-jzlf5\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:19.888313 master-0 kubenswrapper[4409]: I1203 14:31:19.888250 4409 generic.go:334] "Generic (PLEG): container finished" podID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerID="287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a" exitCode=0
Dec 03 14:31:19.888313 master-0 kubenswrapper[4409]: I1203 14:31:19.888301 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerDied","Data":"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"}
Dec 03 14:31:19.888761 master-0 kubenswrapper[4409]: I1203 14:31:19.888334 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-szmjn" event={"ID":"f7f59cd6-2b04-4124-b4f3-e97bd3684f57","Type":"ContainerDied","Data":"2eab94e6602a4e497b524643aa98c2e042c5f0ea806f6e43a0571d4da639470b"}
Dec 03 14:31:19.888761 master-0 kubenswrapper[4409]: I1203 14:31:19.888346 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-szmjn"
Dec 03 14:31:19.888896 master-0 kubenswrapper[4409]: I1203 14:31:19.888359 4409 scope.go:117] "RemoveContainer" containerID="287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"
Dec 03 14:31:19.906785 master-0 kubenswrapper[4409]: I1203 14:31:19.905721 4409 scope.go:117] "RemoveContainer" containerID="84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96"
Dec 03 14:31:19.922274 master-0 kubenswrapper[4409]: I1203 14:31:19.922142 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-szmjn"]
Dec 03 14:31:19.926482 master-0 kubenswrapper[4409]: I1203 14:31:19.926424 4409 scope.go:117] "RemoveContainer" containerID="596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3"
Dec 03 14:31:19.927141 master-0 kubenswrapper[4409]: I1203 14:31:19.927070 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-szmjn"]
Dec 03 14:31:19.950227 master-0 kubenswrapper[4409]: I1203 14:31:19.950162 4409 scope.go:117] "RemoveContainer" containerID="287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"
Dec 03 14:31:19.950936 master-0 kubenswrapper[4409]: E1203 14:31:19.950874 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a\": container with ID starting with 287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a not found: ID does not exist" containerID="287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"
Dec 03 14:31:19.951100 master-0 kubenswrapper[4409]: I1203 14:31:19.950942 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a"} err="failed to get container status \"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a\": rpc error: code = NotFound desc = could not find container \"287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a\": container with ID starting with 287a6d80f293c29331939880871c0876b2d8f9a99f69a553cf3c41280364dc6a not found: ID does not exist"
Dec 03 14:31:19.951100 master-0 kubenswrapper[4409]: I1203 14:31:19.950990 4409 scope.go:117] "RemoveContainer" containerID="84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96"
Dec 03 14:31:19.951659 master-0 kubenswrapper[4409]: E1203 14:31:19.951627 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96\": container with ID starting with 84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96 not found: ID does not exist" containerID="84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96"
Dec 03 14:31:19.951946 master-0 kubenswrapper[4409]: I1203 14:31:19.951655 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96"} err="failed to get container status \"84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96\": rpc error: code = NotFound desc = could not find container \"84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96\": container with ID starting with 84be45164a54daae4f10ef8d985a86f155a3c3019c6a9a9d64746aee3f910e96 not found: ID does not exist"
Dec 03 14:31:19.951946 master-0 kubenswrapper[4409]: I1203 14:31:19.951678 4409 scope.go:117] "RemoveContainer" containerID="596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3"
Dec 03 14:31:19.952181 master-0 kubenswrapper[4409]: E1203 14:31:19.952137 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3\": container with ID starting with 596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3 not found: ID does not exist" containerID="596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3"
Dec 03 14:31:19.952283 master-0 kubenswrapper[4409]: I1203 14:31:19.952184 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3"} err="failed to get container status \"596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3\": rpc error: code = NotFound desc = could not find container \"596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3\": container with ID starting with 596ea6ab92caa21c1c3842fca96f3115e8ab7906c596e17af8eaea998d88efa3 not found: ID does not exist"
Dec 03 14:31:20.689884 master-0 kubenswrapper[4409]: I1203 14:31:20.689811 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:20.725646 master-0 kubenswrapper[4409]: I1203 14:31:20.725578 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:20.738038 master-0 kubenswrapper[4409]: I1203 14:31:20.737320 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:21.824038 master-0 kubenswrapper[4409]: I1203 14:31:21.823919 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" path="/var/lib/kubelet/pods/f7f59cd6-2b04-4124-b4f3-e97bd3684f57/volumes"
Dec 03 14:31:23.129851 master-0 kubenswrapper[4409]: I1203 14:31:23.129753 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"]
Dec 03 14:31:23.130565 master-0 kubenswrapper[4409]: I1203 14:31:23.130152 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fh5cv" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="registry-server" containerID="cri-o://6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310" gracePeriod=2
Dec 03 14:31:23.542675 master-0 kubenswrapper[4409]: I1203 14:31:23.542529 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:23.723052 master-0 kubenswrapper[4409]: I1203 14:31:23.722961 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities\") pod \"5e8aa1b1-5bf2-4763-9338-ef927345f786\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") "
Dec 03 14:31:23.723052 master-0 kubenswrapper[4409]: I1203 14:31:23.723051 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7pl4\" (UniqueName: \"kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4\") pod \"5e8aa1b1-5bf2-4763-9338-ef927345f786\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") "
Dec 03 14:31:23.723436 master-0 kubenswrapper[4409]: I1203 14:31:23.723153 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content\") pod \"5e8aa1b1-5bf2-4763-9338-ef927345f786\" (UID: \"5e8aa1b1-5bf2-4763-9338-ef927345f786\") "
Dec 03 14:31:23.724324 master-0 kubenswrapper[4409]: I1203 14:31:23.724265 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities" (OuterVolumeSpecName: "utilities") pod "5e8aa1b1-5bf2-4763-9338-ef927345f786" (UID: "5e8aa1b1-5bf2-4763-9338-ef927345f786"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:23.726659 master-0 kubenswrapper[4409]: I1203 14:31:23.726601 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4" (OuterVolumeSpecName: "kube-api-access-w7pl4") pod "5e8aa1b1-5bf2-4763-9338-ef927345f786" (UID: "5e8aa1b1-5bf2-4763-9338-ef927345f786"). InnerVolumeSpecName "kube-api-access-w7pl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:31:23.824709 master-0 kubenswrapper[4409]: I1203 14:31:23.824625 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:23.824709 master-0 kubenswrapper[4409]: I1203 14:31:23.824676 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7pl4\" (UniqueName: \"kubernetes.io/projected/5e8aa1b1-5bf2-4763-9338-ef927345f786-kube-api-access-w7pl4\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:23.838730 master-0 kubenswrapper[4409]: I1203 14:31:23.838125 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e8aa1b1-5bf2-4763-9338-ef927345f786" (UID: "5e8aa1b1-5bf2-4763-9338-ef927345f786"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:23.926532 master-0 kubenswrapper[4409]: I1203 14:31:23.926458 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8aa1b1-5bf2-4763-9338-ef927345f786-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:23.937467 master-0 kubenswrapper[4409]: I1203 14:31:23.937414 4409 generic.go:334] "Generic (PLEG): container finished" podID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerID="6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310" exitCode=0
Dec 03 14:31:23.937786 master-0 kubenswrapper[4409]: I1203 14:31:23.937470 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerDied","Data":"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"}
Dec 03 14:31:23.937786 master-0 kubenswrapper[4409]: I1203 14:31:23.937543 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fh5cv" event={"ID":"5e8aa1b1-5bf2-4763-9338-ef927345f786","Type":"ContainerDied","Data":"e70721bdf99f549fdd6faeef2cf23d6c7b67cba5a93bfd6752fed504d359f9e6"}
Dec 03 14:31:23.937786 master-0 kubenswrapper[4409]: I1203 14:31:23.937584 4409 scope.go:117] "RemoveContainer" containerID="6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"
Dec 03 14:31:23.937786 master-0 kubenswrapper[4409]: I1203 14:31:23.937590 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fh5cv"
Dec 03 14:31:23.953386 master-0 kubenswrapper[4409]: I1203 14:31:23.952825 4409 scope.go:117] "RemoveContainer" containerID="d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"
Dec 03 14:31:23.981398 master-0 kubenswrapper[4409]: I1203 14:31:23.981350 4409 scope.go:117] "RemoveContainer" containerID="56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37"
Dec 03 14:31:23.986339 master-0 kubenswrapper[4409]: I1203 14:31:23.986266 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"]
Dec 03 14:31:23.993852 master-0 kubenswrapper[4409]: I1203 14:31:23.993731 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fh5cv"]
Dec 03 14:31:24.000190 master-0 kubenswrapper[4409]: I1203 14:31:24.000152 4409 scope.go:117] "RemoveContainer" containerID="6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"
Dec 03 14:31:24.000751 master-0 kubenswrapper[4409]: E1203 14:31:24.000689 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310\": container with ID starting with 6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310 not found: ID does not exist" containerID="6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"
Dec 03 14:31:24.000841 master-0 kubenswrapper[4409]: I1203 14:31:24.000752 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310"} err="failed to get container status \"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310\": rpc error: code = NotFound desc = could not find container \"6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310\": container with ID starting with 6e72cb0cd47f0f32ed6a8212ba280ecbc1787b875d1f6355991adeeb38f5e310 not found: ID does not exist"
Dec 03 14:31:24.000841 master-0 kubenswrapper[4409]: I1203 14:31:24.000791 4409 scope.go:117] "RemoveContainer" containerID="d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"
Dec 03 14:31:24.001357 master-0 kubenswrapper[4409]: E1203 14:31:24.001312 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55\": container with ID starting with d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55 not found: ID does not exist" containerID="d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"
Dec 03 14:31:24.001440 master-0 kubenswrapper[4409]: I1203 14:31:24.001358 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55"} err="failed to get container status \"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55\": rpc error: code = NotFound desc = could not find container \"d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55\": container with ID starting with d9f654484c5e8bd074ff846ab3746ec68e371f1f7247d1a9cfc195a1585b6e55 not found: ID does not exist"
Dec 03 14:31:24.001440 master-0 kubenswrapper[4409]: I1203 14:31:24.001389 4409 scope.go:117] "RemoveContainer" containerID="56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37"
Dec 03 14:31:24.001651 master-0 kubenswrapper[4409]: E1203 14:31:24.001618 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37\": container with ID starting with 56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37 not found: ID does not exist" containerID="56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37"
Dec 03 14:31:24.001739 master-0 kubenswrapper[4409]: I1203 14:31:24.001648 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37"} err="failed to get container status \"56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37\": rpc error: code = NotFound desc = could not find container \"56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37\": container with ID starting with 56e749818dba9fdb7fae97748af2bc4d08be2c6e2600da6f35b2f2d950a00e37 not found: ID does not exist"
Dec 03 14:31:24.895953 master-0 kubenswrapper[4409]: I1203 14:31:24.895871 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5msvs"]
Dec 03 14:31:24.896559 master-0 kubenswrapper[4409]: I1203 14:31:24.896315 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5msvs" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="registry-server" containerID="cri-o://207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2" gracePeriod=2
Dec 03 14:31:25.308033 master-0 kubenswrapper[4409]: I1203 14:31:25.307967 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:25.451297 master-0 kubenswrapper[4409]: I1203 14:31:25.451232 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content\") pod \"97b4a621-a709-4223-bb72-7c6dd9d63623\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") "
Dec 03 14:31:25.451585 master-0 kubenswrapper[4409]: I1203 14:31:25.451396 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hnv8\" (UniqueName: \"kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8\") pod \"97b4a621-a709-4223-bb72-7c6dd9d63623\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") "
Dec 03 14:31:25.451585 master-0 kubenswrapper[4409]: I1203 14:31:25.451443 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities\") pod \"97b4a621-a709-4223-bb72-7c6dd9d63623\" (UID: \"97b4a621-a709-4223-bb72-7c6dd9d63623\") "
Dec 03 14:31:25.452472 master-0 kubenswrapper[4409]: I1203 14:31:25.452424 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities" (OuterVolumeSpecName: "utilities") pod "97b4a621-a709-4223-bb72-7c6dd9d63623" (UID: "97b4a621-a709-4223-bb72-7c6dd9d63623"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:25.454573 master-0 kubenswrapper[4409]: I1203 14:31:25.454533 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8" (OuterVolumeSpecName: "kube-api-access-8hnv8") pod "97b4a621-a709-4223-bb72-7c6dd9d63623" (UID: "97b4a621-a709-4223-bb72-7c6dd9d63623"). InnerVolumeSpecName "kube-api-access-8hnv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:31:25.511324 master-0 kubenswrapper[4409]: I1203 14:31:25.501065 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97b4a621-a709-4223-bb72-7c6dd9d63623" (UID: "97b4a621-a709-4223-bb72-7c6dd9d63623"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:31:25.552762 master-0 kubenswrapper[4409]: I1203 14:31:25.552697 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:25.552762 master-0 kubenswrapper[4409]: I1203 14:31:25.552736 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hnv8\" (UniqueName: \"kubernetes.io/projected/97b4a621-a709-4223-bb72-7c6dd9d63623-kube-api-access-8hnv8\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:25.552762 master-0 kubenswrapper[4409]: I1203 14:31:25.552749 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97b4a621-a709-4223-bb72-7c6dd9d63623-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:31:25.827399 master-0 kubenswrapper[4409]: I1203 14:31:25.827349 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" path="/var/lib/kubelet/pods/5e8aa1b1-5bf2-4763-9338-ef927345f786/volumes"
Dec 03 14:31:25.955114 master-0 kubenswrapper[4409]: I1203 14:31:25.955056 4409 generic.go:334] "Generic (PLEG): container finished" podID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerID="207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2" exitCode=0
Dec 03 14:31:25.955114 master-0 kubenswrapper[4409]: I1203 14:31:25.955112 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerDied","Data":"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2"}
Dec 03 14:31:25.955680 master-0 kubenswrapper[4409]: I1203 14:31:25.955153 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5msvs" event={"ID":"97b4a621-a709-4223-bb72-7c6dd9d63623","Type":"ContainerDied","Data":"0324123106df7539cf7bdb24353bb930a8eb4fbe914b1a6f91adab2b238d4ce4"}
Dec 03 14:31:25.955680 master-0 kubenswrapper[4409]: I1203 14:31:25.955182 4409 scope.go:117] "RemoveContainer" containerID="207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2"
Dec 03 14:31:25.955680 master-0 kubenswrapper[4409]: I1203 14:31:25.955274 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5msvs"
Dec 03 14:31:25.977394 master-0 kubenswrapper[4409]: I1203 14:31:25.977346 4409 scope.go:117] "RemoveContainer" containerID="998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30"
Dec 03 14:31:25.991606 master-0 kubenswrapper[4409]: I1203 14:31:25.991544 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5msvs"]
Dec 03 14:31:25.998231 master-0 kubenswrapper[4409]: I1203 14:31:25.998194 4409 scope.go:117] "RemoveContainer" containerID="3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8"
Dec 03 14:31:26.000131 master-0 kubenswrapper[4409]: I1203 14:31:26.000067 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5msvs"]
Dec 03 14:31:26.014431 master-0 kubenswrapper[4409]: I1203 14:31:26.014388 4409 scope.go:117] "RemoveContainer" containerID="207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2"
Dec 03
14:31:26.014961 master-0 kubenswrapper[4409]: E1203 14:31:26.014922 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2\": container with ID starting with 207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2 not found: ID does not exist" containerID="207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2" Dec 03 14:31:26.015052 master-0 kubenswrapper[4409]: I1203 14:31:26.014989 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2"} err="failed to get container status \"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2\": rpc error: code = NotFound desc = could not find container \"207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2\": container with ID starting with 207ea88c019bac8d315f234dd1310928daf47878e3be7c7717bf00fbf4450bc2 not found: ID does not exist" Dec 03 14:31:26.015124 master-0 kubenswrapper[4409]: I1203 14:31:26.015058 4409 scope.go:117] "RemoveContainer" containerID="998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30" Dec 03 14:31:26.015572 master-0 kubenswrapper[4409]: E1203 14:31:26.015536 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30\": container with ID starting with 998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30 not found: ID does not exist" containerID="998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30" Dec 03 14:31:26.015631 master-0 kubenswrapper[4409]: I1203 14:31:26.015570 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30"} err="failed 
to get container status \"998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30\": rpc error: code = NotFound desc = could not find container \"998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30\": container with ID starting with 998454e27f7aa005f5737fa996b99166c8c172a6a625b22e548856beaa7e2d30 not found: ID does not exist" Dec 03 14:31:26.015631 master-0 kubenswrapper[4409]: I1203 14:31:26.015599 4409 scope.go:117] "RemoveContainer" containerID="3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8" Dec 03 14:31:26.015961 master-0 kubenswrapper[4409]: E1203 14:31:26.015931 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8\": container with ID starting with 3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8 not found: ID does not exist" containerID="3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8" Dec 03 14:31:26.016097 master-0 kubenswrapper[4409]: I1203 14:31:26.016066 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8"} err="failed to get container status \"3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8\": rpc error: code = NotFound desc = could not find container \"3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8\": container with ID starting with 3d3bf710494dcf0a5c3ff56ee99decec6c22ab8b407c8657b3fbb00f6a5bf3d8 not found: ID does not exist" Dec 03 14:31:27.824757 master-0 kubenswrapper[4409]: I1203 14:31:27.824681 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" path="/var/lib/kubelet/pods/97b4a621-a709-4223-bb72-7c6dd9d63623/volumes" Dec 03 14:31:37.424735 master-0 kubenswrapper[4409]: I1203 14:31:37.424676 4409 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-console/console-6c9c84854-xf7nv" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" containerID="cri-o://5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d" gracePeriod=15 Dec 03 14:31:37.836401 master-0 kubenswrapper[4409]: I1203 14:31:37.836350 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c9c84854-xf7nv_8b442f72-b113-4227-93b5-ea1ae90d5154/console/2.log" Dec 03 14:31:37.836617 master-0 kubenswrapper[4409]: I1203 14:31:37.836430 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:31:37.912665 master-0 kubenswrapper[4409]: I1203 14:31:37.912626 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.912982 master-0 kubenswrapper[4409]: I1203 14:31:37.912963 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913180 master-0 kubenswrapper[4409]: I1203 14:31:37.913166 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913345 master-0 kubenswrapper[4409]: I1203 14:31:37.913330 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913444 master-0 kubenswrapper[4409]: I1203 14:31:37.913432 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913573 master-0 kubenswrapper[4409]: I1203 14:31:37.913552 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913744 master-0 kubenswrapper[4409]: I1203 14:31:37.913726 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") pod \"8b442f72-b113-4227-93b5-ea1ae90d5154\" (UID: \"8b442f72-b113-4227-93b5-ea1ae90d5154\") " Dec 03 14:31:37.913956 master-0 kubenswrapper[4409]: I1203 14:31:37.913769 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:31:37.914050 master-0 kubenswrapper[4409]: I1203 14:31:37.913983 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:31:37.914178 master-0 kubenswrapper[4409]: I1203 14:31:37.914147 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config" (OuterVolumeSpecName: "console-config") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:31:37.914280 master-0 kubenswrapper[4409]: I1203 14:31:37.914226 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca" (OuterVolumeSpecName: "service-ca") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:31:37.914515 master-0 kubenswrapper[4409]: I1203 14:31:37.914493 4409 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:37.914611 master-0 kubenswrapper[4409]: I1203 14:31:37.914599 4409 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-service-ca\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:37.914683 master-0 kubenswrapper[4409]: I1203 14:31:37.914673 4409 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-console-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:37.914758 master-0 kubenswrapper[4409]: I1203 14:31:37.914747 4409 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8b442f72-b113-4227-93b5-ea1ae90d5154-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:37.915950 master-0 kubenswrapper[4409]: I1203 14:31:37.915915 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:31:37.916566 master-0 kubenswrapper[4409]: I1203 14:31:37.916535 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:31:37.917155 master-0 kubenswrapper[4409]: I1203 14:31:37.917114 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn" (OuterVolumeSpecName: "kube-api-access-d8bbn") pod "8b442f72-b113-4227-93b5-ea1ae90d5154" (UID: "8b442f72-b113-4227-93b5-ea1ae90d5154"). InnerVolumeSpecName "kube-api-access-d8bbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:31:38.016261 master-0 kubenswrapper[4409]: I1203 14:31:38.016197 4409 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:38.016563 master-0 kubenswrapper[4409]: I1203 14:31:38.016551 4409 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b442f72-b113-4227-93b5-ea1ae90d5154-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:38.016645 master-0 kubenswrapper[4409]: I1203 14:31:38.016634 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8bbn\" (UniqueName: \"kubernetes.io/projected/8b442f72-b113-4227-93b5-ea1ae90d5154-kube-api-access-d8bbn\") on node \"master-0\" DevicePath \"\"" Dec 03 14:31:38.051322 master-0 kubenswrapper[4409]: I1203 14:31:38.051180 4409 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-6c9c84854-xf7nv_8b442f72-b113-4227-93b5-ea1ae90d5154/console/2.log" Dec 03 14:31:38.051322 master-0 kubenswrapper[4409]: I1203 14:31:38.051243 4409 generic.go:334] "Generic (PLEG): container finished" podID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerID="5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d" exitCode=2 Dec 03 14:31:38.051322 master-0 kubenswrapper[4409]: I1203 14:31:38.051284 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerDied","Data":"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d"} Dec 03 14:31:38.051322 master-0 kubenswrapper[4409]: I1203 14:31:38.051318 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c9c84854-xf7nv" event={"ID":"8b442f72-b113-4227-93b5-ea1ae90d5154","Type":"ContainerDied","Data":"1d42f7d5d372f9a3877b3b45dcf0372cf75642272766e45ab80af733ac039158"} Dec 03 14:31:38.051693 master-0 kubenswrapper[4409]: I1203 14:31:38.051338 4409 scope.go:117] "RemoveContainer" containerID="5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d" Dec 03 14:31:38.051693 master-0 kubenswrapper[4409]: I1203 14:31:38.051381 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6c9c84854-xf7nv" Dec 03 14:31:38.069332 master-0 kubenswrapper[4409]: I1203 14:31:38.068443 4409 scope.go:117] "RemoveContainer" containerID="5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d" Dec 03 14:31:38.069332 master-0 kubenswrapper[4409]: E1203 14:31:38.069242 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d\": container with ID starting with 5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d not found: ID does not exist" containerID="5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d" Dec 03 14:31:38.069617 master-0 kubenswrapper[4409]: I1203 14:31:38.069346 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d"} err="failed to get container status \"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d\": rpc error: code = NotFound desc = could not find container \"5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d\": container with ID starting with 5b48a0bb29f9eb52a127de6b2fadddfee61841683388c90548eeaeed218afe8d not found: ID does not exist" Dec 03 14:31:38.115469 master-0 kubenswrapper[4409]: I1203 14:31:38.115321 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"] Dec 03 14:31:38.126107 master-0 kubenswrapper[4409]: I1203 14:31:38.126026 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6c9c84854-xf7nv"] Dec 03 14:31:39.824122 master-0 kubenswrapper[4409]: I1203 14:31:39.824054 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" path="/var/lib/kubelet/pods/8b442f72-b113-4227-93b5-ea1ae90d5154/volumes" Dec 03 14:32:04.796060 master-0 
kubenswrapper[4409]: I1203 14:32:04.795876 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:32:04.797113 master-0 kubenswrapper[4409]: I1203 14:32:04.796071 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:32:07.944123 master-0 kubenswrapper[4409]: I1203 14:32:07.944027 4409 scope.go:117] "RemoveContainer" containerID="6eae176e0ae8fefc0bdfeffe3c926861eedc7d77177bb0a1c542bb03d7b718af" Dec 03 14:32:07.981558 master-0 kubenswrapper[4409]: I1203 14:32:07.981502 4409 scope.go:117] "RemoveContainer" containerID="2d61d8802bbc570d04dd9977fb07dd6294b8212bfe0e7176af3f6ce67f85ee5a" Dec 03 14:32:08.007718 master-0 kubenswrapper[4409]: I1203 14:32:08.007651 4409 scope.go:117] "RemoveContainer" containerID="d0a827a444c38d75c515a416cb1a917a642fb70a7523efb53087345e0bb3e2ee" Dec 03 14:32:08.026471 master-0 kubenswrapper[4409]: I1203 14:32:08.026415 4409 scope.go:117] "RemoveContainer" containerID="05a2610f6bca4defc9b7ede8255a1c063ebe53f7d07ab7227fcf2edbc056b331" Dec 03 14:32:08.132687 master-0 kubenswrapper[4409]: E1203 14:32:08.132555 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae\": container with ID starting with 7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae not found: ID does not exist" containerID="7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae" Dec 03 
14:32:08.132687 master-0 kubenswrapper[4409]: I1203 14:32:08.132711 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae" err="rpc error: code = NotFound desc = could not find container \"7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae\": container with ID starting with 7ee0080086ff4c68ffea3b6986f27c5aca0f9d49556379b0cd056cc069feb1ae not found: ID does not exist" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.455976 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456410 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456427 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456441 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456447 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456465 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456472 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 
14:32:11.456480 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456486 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456498 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456504 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456517 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456525 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456535 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce0878b7-82ed-4efb-b984-38ad8fd8f185" containerName="collect-profiles" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456541 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce0878b7-82ed-4efb-b984-38ad8fd8f185" containerName="collect-profiles" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456550 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456556 4409 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456566 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456572 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456580 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456586 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="extract-content" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456599 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456605 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="extract-utilities" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456618 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456624 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456631 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="registry-server" Dec 03 14:32:11.458056 master-0 
kubenswrapper[4409]: I1203 14:32:11.456636 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: E1203 14:32:11.456645 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456651 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456782 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="2526e28d-9fea-4b08-a5da-599f7fb81b1e" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456794 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b442f72-b113-4227-93b5-ea1ae90d5154" containerName="console" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456805 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f59cd6-2b04-4124-b4f3-e97bd3684f57" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456814 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="97b4a621-a709-4223-bb72-7c6dd9d63623" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456833 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce0878b7-82ed-4efb-b984-38ad8fd8f185" containerName="collect-profiles" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.456841 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8aa1b1-5bf2-4763-9338-ef927345f786" containerName="registry-server" Dec 03 14:32:11.458056 master-0 kubenswrapper[4409]: I1203 14:32:11.457405 4409 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.460613 master-0 kubenswrapper[4409]: I1203 14:32:11.460460 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 03 14:32:11.460978 master-0 kubenswrapper[4409]: I1203 14:32:11.460859 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-4xstp" Dec 03 14:32:11.472265 master-0 kubenswrapper[4409]: I1203 14:32:11.472200 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Dec 03 14:32:11.557214 master-0 kubenswrapper[4409]: I1203 14:32:11.557067 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.557513 master-0 kubenswrapper[4409]: I1203 14:32:11.557229 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.557513 master-0 kubenswrapper[4409]: I1203 14:32:11.557332 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.660182 master-0 
kubenswrapper[4409]: I1203 14:32:11.660069 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.660497 master-0 kubenswrapper[4409]: I1203 14:32:11.660225 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.660497 master-0 kubenswrapper[4409]: I1203 14:32:11.660401 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.660497 master-0 kubenswrapper[4409]: I1203 14:32:11.660446 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.660681 master-0 kubenswrapper[4409]: I1203 14:32:11.660490 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.676806 master-0 
kubenswrapper[4409]: I1203 14:32:11.676745 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access\") pod \"installer-4-master-0\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") " pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:11.779691 master-0 kubenswrapper[4409]: I1203 14:32:11.779565 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Dec 03 14:32:12.191302 master-0 kubenswrapper[4409]: W1203 14:32:12.191230 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod237f0597_086e_4a5a_a120_59232c16219e.slice/crio-ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae WatchSource:0}: Error finding container ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae: Status 404 returned error can't find the container with id ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae Dec 03 14:32:12.191489 master-0 kubenswrapper[4409]: I1203 14:32:12.191282 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Dec 03 14:32:12.300752 master-0 kubenswrapper[4409]: I1203 14:32:12.300689 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"237f0597-086e-4a5a-a120-59232c16219e","Type":"ContainerStarted","Data":"ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae"} Dec 03 14:32:13.309839 master-0 kubenswrapper[4409]: I1203 14:32:13.309759 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"237f0597-086e-4a5a-a120-59232c16219e","Type":"ContainerStarted","Data":"bc2dc5a99c0629b7c8022cb3432a021bf0487b146e7d00e47fc28b872ba0a0ea"} Dec 03 14:32:13.345360 
master-0 kubenswrapper[4409]: I1203 14:32:13.345238 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.345205717 podStartE2EDuration="2.345205717s" podCreationTimestamp="2025-12-03 14:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:32:13.332050332 +0000 UTC m=+365.659112838" watchObservedRunningTime="2025-12-03 14:32:13.345205717 +0000 UTC m=+365.672268223" Dec 03 14:32:20.693420 master-0 kubenswrapper[4409]: I1203 14:32:20.693355 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-4c6x2"] Dec 03 14:32:20.694408 master-0 kubenswrapper[4409]: I1203 14:32:20.694383 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.697178 master-0 kubenswrapper[4409]: I1203 14:32:20.697138 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 03 14:32:20.719478 master-0 kubenswrapper[4409]: I1203 14:32:20.719405 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.719771 master-0 kubenswrapper[4409]: I1203 14:32:20.719509 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " 
pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.719771 master-0 kubenswrapper[4409]: I1203 14:32:20.719551 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vggv2\" (UniqueName: \"kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.719771 master-0 kubenswrapper[4409]: I1203 14:32:20.719582 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.820973 master-0 kubenswrapper[4409]: I1203 14:32:20.820906 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.820973 master-0 kubenswrapper[4409]: I1203 14:32:20.820971 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.821318 master-0 kubenswrapper[4409]: I1203 14:32:20.821024 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vggv2\" (UniqueName: 
\"kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.821318 master-0 kubenswrapper[4409]: I1203 14:32:20.821086 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.821318 master-0 kubenswrapper[4409]: I1203 14:32:20.821165 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.821750 master-0 kubenswrapper[4409]: I1203 14:32:20.821723 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.821818 master-0 kubenswrapper[4409]: I1203 14:32:20.821784 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:20.838335 master-0 kubenswrapper[4409]: I1203 14:32:20.838271 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vggv2\" 
(UniqueName: \"kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2\") pod \"cni-sysctl-allowlist-ds-4c6x2\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") " pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:21.013679 master-0 kubenswrapper[4409]: I1203 14:32:21.013491 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:21.033636 master-0 kubenswrapper[4409]: W1203 14:32:21.033439 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43a808e0_babc_45b3_b69f_12cc77801356.slice/crio-a63587d386e5ae4706628f77228c26fea32fd749453faf6f5499d2740534ff4f WatchSource:0}: Error finding container a63587d386e5ae4706628f77228c26fea32fd749453faf6f5499d2740534ff4f: Status 404 returned error can't find the container with id a63587d386e5ae4706628f77228c26fea32fd749453faf6f5499d2740534ff4f Dec 03 14:32:21.369928 master-0 kubenswrapper[4409]: I1203 14:32:21.369865 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" event={"ID":"43a808e0-babc-45b3-b69f-12cc77801356","Type":"ContainerStarted","Data":"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"} Dec 03 14:32:21.369928 master-0 kubenswrapper[4409]: I1203 14:32:21.369921 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" event={"ID":"43a808e0-babc-45b3-b69f-12cc77801356","Type":"ContainerStarted","Data":"a63587d386e5ae4706628f77228c26fea32fd749453faf6f5499d2740534ff4f"} Dec 03 14:32:21.370701 master-0 kubenswrapper[4409]: I1203 14:32:21.370669 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:21.390791 master-0 kubenswrapper[4409]: I1203 14:32:21.390704 4409 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" podStartSLOduration=1.390682044 podStartE2EDuration="1.390682044s" podCreationTimestamp="2025-12-03 14:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:32:21.388738718 +0000 UTC m=+373.715801234" watchObservedRunningTime="2025-12-03 14:32:21.390682044 +0000 UTC m=+373.717744550" Dec 03 14:32:22.403321 master-0 kubenswrapper[4409]: I1203 14:32:22.403267 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" Dec 03 14:32:22.645111 master-0 kubenswrapper[4409]: I1203 14:32:22.645044 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-4c6x2"] Dec 03 14:32:24.395594 master-0 kubenswrapper[4409]: I1203 14:32:24.395246 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" gracePeriod=30 Dec 03 14:32:30.228918 master-0 kubenswrapper[4409]: I1203 14:32:30.228835 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-574cbf778d-hr92j"] Dec 03 14:32:30.231415 master-0 kubenswrapper[4409]: I1203 14:32:30.231378 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.242710 master-0 kubenswrapper[4409]: I1203 14:32:30.242644 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-574cbf778d-hr92j"] Dec 03 14:32:30.279228 master-0 kubenswrapper[4409]: I1203 14:32:30.279133 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d697c6e8-a274-48f2-8607-db5562a06626-webhook-certs\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: \"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.279228 master-0 kubenswrapper[4409]: I1203 14:32:30.279218 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2tgj\" (UniqueName: \"kubernetes.io/projected/d697c6e8-a274-48f2-8607-db5562a06626-kube-api-access-h2tgj\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: \"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.381649 master-0 kubenswrapper[4409]: I1203 14:32:30.381574 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d697c6e8-a274-48f2-8607-db5562a06626-webhook-certs\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: \"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.381649 master-0 kubenswrapper[4409]: I1203 14:32:30.381652 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2tgj\" (UniqueName: \"kubernetes.io/projected/d697c6e8-a274-48f2-8607-db5562a06626-kube-api-access-h2tgj\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: 
\"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.386695 master-0 kubenswrapper[4409]: I1203 14:32:30.386627 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d697c6e8-a274-48f2-8607-db5562a06626-webhook-certs\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: \"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.400070 master-0 kubenswrapper[4409]: I1203 14:32:30.399975 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2tgj\" (UniqueName: \"kubernetes.io/projected/d697c6e8-a274-48f2-8607-db5562a06626-kube-api-access-h2tgj\") pod \"multus-admission-controller-574cbf778d-hr92j\" (UID: \"d697c6e8-a274-48f2-8607-db5562a06626\") " pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.555043 master-0 kubenswrapper[4409]: I1203 14:32:30.554964 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" Dec 03 14:32:30.986694 master-0 kubenswrapper[4409]: I1203 14:32:30.986630 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-574cbf778d-hr92j"] Dec 03 14:32:30.987332 master-0 kubenswrapper[4409]: W1203 14:32:30.987254 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd697c6e8_a274_48f2_8607_db5562a06626.slice/crio-03e36e3ac0d90e85b3beb6d02aa60762ac3fe6e34ed3ccbef4e82bcb22d00f7d WatchSource:0}: Error finding container 03e36e3ac0d90e85b3beb6d02aa60762ac3fe6e34ed3ccbef4e82bcb22d00f7d: Status 404 returned error can't find the container with id 03e36e3ac0d90e85b3beb6d02aa60762ac3fe6e34ed3ccbef4e82bcb22d00f7d Dec 03 14:32:31.024551 master-0 kubenswrapper[4409]: E1203 14:32:31.024425 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:31.025813 master-0 kubenswrapper[4409]: E1203 14:32:31.025761 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:31.026851 master-0 kubenswrapper[4409]: E1203 14:32:31.026779 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:31.026974 master-0 kubenswrapper[4409]: E1203 14:32:31.026868 4409 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" Dec 03 14:32:31.443354 master-0 kubenswrapper[4409]: I1203 14:32:31.443289 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" event={"ID":"d697c6e8-a274-48f2-8607-db5562a06626","Type":"ContainerStarted","Data":"974b844cc27a6fa99fb8da023c4b8d31114a55b5b56c8c0a8089cfdb0c002a53"} Dec 03 14:32:31.443354 master-0 kubenswrapper[4409]: I1203 14:32:31.443342 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" event={"ID":"d697c6e8-a274-48f2-8607-db5562a06626","Type":"ContainerStarted","Data":"d5d561001139d6d9b54a1610d5f43d392bb259bbff6965a7a22a15b12933835d"} Dec 03 14:32:31.443354 master-0 kubenswrapper[4409]: I1203 14:32:31.443352 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" event={"ID":"d697c6e8-a274-48f2-8607-db5562a06626","Type":"ContainerStarted","Data":"03e36e3ac0d90e85b3beb6d02aa60762ac3fe6e34ed3ccbef4e82bcb22d00f7d"} Dec 03 14:32:31.467515 master-0 kubenswrapper[4409]: I1203 14:32:31.467378 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-574cbf778d-hr92j" podStartSLOduration=1.467252671 podStartE2EDuration="1.467252671s" podCreationTimestamp="2025-12-03 14:32:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 
14:32:31.465377377 +0000 UTC m=+383.792439903" watchObservedRunningTime="2025-12-03 14:32:31.467252671 +0000 UTC m=+383.794315197" Dec 03 14:32:31.531149 master-0 kubenswrapper[4409]: I1203 14:32:31.530961 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"] Dec 03 14:32:31.531403 master-0 kubenswrapper[4409]: I1203 14:32:31.531256 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="multus-admission-controller" containerID="cri-o://f707b5a65c5a5509b5382c713002ab668a4590bc8b8a861cec9c1fbd38881498" gracePeriod=30 Dec 03 14:32:31.531403 master-0 kubenswrapper[4409]: I1203 14:32:31.531397 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="kube-rbac-proxy" containerID="cri-o://87732635fbd1d41343c81f9da0ba28c1db41ebda9b9ab9b235767cc746eb95c8" gracePeriod=30 Dec 03 14:32:33.464947 master-0 kubenswrapper[4409]: I1203 14:32:33.464878 4409 generic.go:334] "Generic (PLEG): container finished" podID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerID="87732635fbd1d41343c81f9da0ba28c1db41ebda9b9ab9b235767cc746eb95c8" exitCode=0 Dec 03 14:32:33.464947 master-0 kubenswrapper[4409]: I1203 14:32:33.464935 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerDied","Data":"87732635fbd1d41343c81f9da0ba28c1db41ebda9b9ab9b235767cc746eb95c8"} Dec 03 14:32:34.795732 master-0 kubenswrapper[4409]: I1203 14:32:34.795570 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 03 14:32:34.795732 master-0 kubenswrapper[4409]: I1203 14:32:34.795636 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 03 14:32:41.016821 master-0 kubenswrapper[4409]: E1203 14:32:41.016694 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:41.018715 master-0 kubenswrapper[4409]: E1203 14:32:41.018631 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:41.020078 master-0 kubenswrapper[4409]: E1203 14:32:41.020021 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 03 14:32:41.020202 master-0 kubenswrapper[4409]: E1203 14:32:41.020076 4409 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" Dec 03 14:32:45.361708 master-0 kubenswrapper[4409]: I1203 14:32:45.361569 4409 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Dec 03 14:32:45.362342 master-0 kubenswrapper[4409]: I1203 14:32:45.361870 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" containerID="cri-o://3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0" gracePeriod=30 Dec 03 14:32:45.362342 master-0 kubenswrapper[4409]: I1203 14:32:45.361895 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" containerID="cri-o://3d63c97e8181b8a5a98730093a5cc984581455dd5e6126329dda21ffe29cf740" gracePeriod=30 Dec 03 14:32:45.362342 master-0 kubenswrapper[4409]: I1203 14:32:45.361987 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46" gracePeriod=30 Dec 03 14:32:45.362342 master-0 kubenswrapper[4409]: I1203 14:32:45.362061 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-recovery-controller" 
containerID="cri-o://bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860" gracePeriod=30 Dec 03 14:32:45.363080 master-0 kubenswrapper[4409]: I1203 14:32:45.362951 4409 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Dec 03 14:32:45.363305 master-0 kubenswrapper[4409]: E1203 14:32:45.363272 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-cert-syncer" Dec 03 14:32:45.363305 master-0 kubenswrapper[4409]: I1203 14:32:45.363293 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-cert-syncer" Dec 03 14:32:45.363305 master-0 kubenswrapper[4409]: E1203 14:32:45.363303 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" Dec 03 14:32:45.363305 master-0 kubenswrapper[4409]: I1203 14:32:45.363309 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: E1203 14:32:45.363323 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363330 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: E1203 14:32:45.363349 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363355 4409 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: E1203 14:32:45.363363 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-recovery-controller" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363369 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-recovery-controller" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363496 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-recovery-controller" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363509 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363521 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager" Dec 03 14:32:45.363524 master-0 kubenswrapper[4409]: I1203 14:32:45.363532 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="kube-controller-manager-cert-syncer" Dec 03 14:32:45.363982 master-0 kubenswrapper[4409]: I1203 14:32:45.363557 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1dbec7c25a38180c3a6691040eb5a8" containerName="cluster-policy-controller" Dec 03 14:32:45.524796 master-0 kubenswrapper[4409]: I1203 14:32:45.524743 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:32:45.524796 master-0 kubenswrapper[4409]: I1203 14:32:45.524799 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:32:45.561308 master-0 kubenswrapper[4409]: I1203 14:32:45.561260 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/1.log" Dec 03 14:32:45.565090 master-0 kubenswrapper[4409]: I1203 14:32:45.564422 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager/2.log" Dec 03 14:32:45.565090 master-0 kubenswrapper[4409]: I1203 14:32:45.564551 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.567417 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/1.log"
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568184 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager/2.log"
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568229 4409 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="3d63c97e8181b8a5a98730093a5cc984581455dd5e6126329dda21ffe29cf740" exitCode=0
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568248 4409 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860" exitCode=0
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568257 4409 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46" exitCode=2
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568265 4409 generic.go:334] "Generic (PLEG): container finished" podID="bf1dbec7c25a38180c3a6691040eb5a8" containerID="3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0" exitCode=0
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568302 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa82110b9b869b53bf054ba329f2ff0d9b09b2389bd27a64908fd39c82a1a095"
Dec 03 14:32:45.568721 master-0 kubenswrapper[4409]: I1203 14:32:45.568319 4409 scope.go:117] "RemoveContainer" containerID="0ed71d197ff0d9c0bde7e69f37a2b26879fcadaecb81238b68003372da793636"
Dec 03 14:32:45.569277 master-0 kubenswrapper[4409]: I1203 14:32:45.569141 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="bf1dbec7c25a38180c3a6691040eb5a8" podUID="0ac3cb6f0e48ec17cf6a6842aa7feeff"
Dec 03 14:32:45.626741 master-0 kubenswrapper[4409]: I1203 14:32:45.626594 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:45.626741 master-0 kubenswrapper[4409]: I1203 14:32:45.626662 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:45.626741 master-0 kubenswrapper[4409]: I1203 14:32:45.626727 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:45.627152 master-0 kubenswrapper[4409]: I1203 14:32:45.626780 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0ac3cb6f0e48ec17cf6a6842aa7feeff-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"0ac3cb6f0e48ec17cf6a6842aa7feeff\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:45.727740 master-0 kubenswrapper[4409]: I1203 14:32:45.727685 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") pod \"bf1dbec7c25a38180c3a6691040eb5a8\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") "
Dec 03 14:32:45.727977 master-0 kubenswrapper[4409]: I1203 14:32:45.727766 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") pod \"bf1dbec7c25a38180c3a6691040eb5a8\" (UID: \"bf1dbec7c25a38180c3a6691040eb5a8\") "
Dec 03 14:32:45.727977 master-0 kubenswrapper[4409]: I1203 14:32:45.727848 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "bf1dbec7c25a38180c3a6691040eb5a8" (UID: "bf1dbec7c25a38180c3a6691040eb5a8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:32:45.727977 master-0 kubenswrapper[4409]: I1203 14:32:45.727906 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "bf1dbec7c25a38180c3a6691040eb5a8" (UID: "bf1dbec7c25a38180c3a6691040eb5a8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:32:45.728535 master-0 kubenswrapper[4409]: I1203 14:32:45.728488 4409 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-cert-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:45.728535 master-0 kubenswrapper[4409]: I1203 14:32:45.728514 4409 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/bf1dbec7c25a38180c3a6691040eb5a8-resource-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:45.825181 master-0 kubenswrapper[4409]: I1203 14:32:45.825116 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1dbec7c25a38180c3a6691040eb5a8" path="/var/lib/kubelet/pods/bf1dbec7c25a38180c3a6691040eb5a8/volumes"
Dec 03 14:32:46.576335 master-0 kubenswrapper[4409]: I1203 14:32:46.576290 4409 generic.go:334] "Generic (PLEG): container finished" podID="237f0597-086e-4a5a-a120-59232c16219e" containerID="bc2dc5a99c0629b7c8022cb3432a021bf0487b146e7d00e47fc28b872ba0a0ea" exitCode=0
Dec 03 14:32:46.577044 master-0 kubenswrapper[4409]: I1203 14:32:46.576371 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"237f0597-086e-4a5a-a120-59232c16219e","Type":"ContainerDied","Data":"bc2dc5a99c0629b7c8022cb3432a021bf0487b146e7d00e47fc28b872ba0a0ea"}
Dec 03 14:32:46.580078 master-0 kubenswrapper[4409]: I1203 14:32:46.580045 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_bf1dbec7c25a38180c3a6691040eb5a8/kube-controller-manager-cert-syncer/1.log"
Dec 03 14:32:46.582297 master-0 kubenswrapper[4409]: I1203 14:32:46.582255 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:46.599389 master-0 kubenswrapper[4409]: I1203 14:32:46.599307 4409 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="bf1dbec7c25a38180c3a6691040eb5a8" podUID="0ac3cb6f0e48ec17cf6a6842aa7feeff"
Dec 03 14:32:47.857633 master-0 kubenswrapper[4409]: I1203 14:32:47.857478 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Dec 03 14:32:47.962501 master-0 kubenswrapper[4409]: I1203 14:32:47.961915 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access\") pod \"237f0597-086e-4a5a-a120-59232c16219e\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") "
Dec 03 14:32:47.962809 master-0 kubenswrapper[4409]: I1203 14:32:47.962675 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock\") pod \"237f0597-086e-4a5a-a120-59232c16219e\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") "
Dec 03 14:32:47.962809 master-0 kubenswrapper[4409]: I1203 14:32:47.962730 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir\") pod \"237f0597-086e-4a5a-a120-59232c16219e\" (UID: \"237f0597-086e-4a5a-a120-59232c16219e\") "
Dec 03 14:32:47.962965 master-0 kubenswrapper[4409]: I1203 14:32:47.962866 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock" (OuterVolumeSpecName: "var-lock") pod "237f0597-086e-4a5a-a120-59232c16219e" (UID: "237f0597-086e-4a5a-a120-59232c16219e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:32:47.962965 master-0 kubenswrapper[4409]: I1203 14:32:47.962919 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "237f0597-086e-4a5a-a120-59232c16219e" (UID: "237f0597-086e-4a5a-a120-59232c16219e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:32:47.963172 master-0 kubenswrapper[4409]: I1203 14:32:47.963142 4409 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:47.963172 master-0 kubenswrapper[4409]: I1203 14:32:47.963166 4409 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/237f0597-086e-4a5a-a120-59232c16219e-var-lock\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:47.965344 master-0 kubenswrapper[4409]: I1203 14:32:47.965278 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "237f0597-086e-4a5a-a120-59232c16219e" (UID: "237f0597-086e-4a5a-a120-59232c16219e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:32:48.064835 master-0 kubenswrapper[4409]: I1203 14:32:48.064710 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/237f0597-086e-4a5a-a120-59232c16219e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:48.601543 master-0 kubenswrapper[4409]: I1203 14:32:48.601463 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"237f0597-086e-4a5a-a120-59232c16219e","Type":"ContainerDied","Data":"ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae"}
Dec 03 14:32:48.601543 master-0 kubenswrapper[4409]: I1203 14:32:48.601544 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ece8cfc9030e14479c3103f6b5d662d5b8376bcf09e7b76253415de6bcb9acae"
Dec 03 14:32:48.601834 master-0 kubenswrapper[4409]: I1203 14:32:48.601677 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Dec 03 14:32:51.015966 master-0 kubenswrapper[4409]: E1203 14:32:51.015876 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 14:32:51.017292 master-0 kubenswrapper[4409]: E1203 14:32:51.017217 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 14:32:51.018684 master-0 kubenswrapper[4409]: E1203 14:32:51.018600 4409 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" cmd=["/bin/bash","-c","test -f /ready/ready"]
Dec 03 14:32:51.018684 master-0 kubenswrapper[4409]: E1203 14:32:51.018651 4409 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins"
Dec 03 14:32:54.508512 master-0 kubenswrapper[4409]: I1203 14:32:54.508456 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-4c6x2_43a808e0-babc-45b3-b69f-12cc77801356/kube-multus-additional-cni-plugins/0.log"
Dec 03 14:32:54.509232 master-0 kubenswrapper[4409]: I1203 14:32:54.508536 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2"
Dec 03 14:32:54.642268 master-0 kubenswrapper[4409]: I1203 14:32:54.642200 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-4c6x2_43a808e0-babc-45b3-b69f-12cc77801356/kube-multus-additional-cni-plugins/0.log"
Dec 03 14:32:54.642562 master-0 kubenswrapper[4409]: I1203 14:32:54.642292 4409 generic.go:334] "Generic (PLEG): container finished" podID="43a808e0-babc-45b3-b69f-12cc77801356" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785" exitCode=137
Dec 03 14:32:54.642562 master-0 kubenswrapper[4409]: I1203 14:32:54.642346 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" event={"ID":"43a808e0-babc-45b3-b69f-12cc77801356","Type":"ContainerDied","Data":"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"}
Dec 03 14:32:54.642562 master-0 kubenswrapper[4409]: I1203 14:32:54.642379 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2" event={"ID":"43a808e0-babc-45b3-b69f-12cc77801356","Type":"ContainerDied","Data":"a63587d386e5ae4706628f77228c26fea32fd749453faf6f5499d2740534ff4f"}
Dec 03 14:32:54.642562 master-0 kubenswrapper[4409]: I1203 14:32:54.642400 4409 scope.go:117] "RemoveContainer" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"
Dec 03 14:32:54.642562 master-0 kubenswrapper[4409]: I1203 14:32:54.642397 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-4c6x2"
Dec 03 14:32:54.660230 master-0 kubenswrapper[4409]: I1203 14:32:54.660051 4409 scope.go:117] "RemoveContainer" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"
Dec 03 14:32:54.660653 master-0 kubenswrapper[4409]: E1203 14:32:54.660594 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785\": container with ID starting with 0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785 not found: ID does not exist" containerID="0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"
Dec 03 14:32:54.660745 master-0 kubenswrapper[4409]: I1203 14:32:54.660668 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785"} err="failed to get container status \"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785\": rpc error: code = NotFound desc = could not find container \"0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785\": container with ID starting with 0a3d430e2835a6a8cb5bf26481d4e110efa1b9044fe19b70d88a9a372b6a7785 not found: ID does not exist"
Dec 03 14:32:54.665294 master-0 kubenswrapper[4409]: I1203 14:32:54.665227 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vggv2\" (UniqueName: \"kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2\") pod \"43a808e0-babc-45b3-b69f-12cc77801356\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") "
Dec 03 14:32:54.665294 master-0 kubenswrapper[4409]: I1203 14:32:54.665295 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir\") pod \"43a808e0-babc-45b3-b69f-12cc77801356\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") "
Dec 03 14:32:54.665585 master-0 kubenswrapper[4409]: I1203 14:32:54.665354 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready\") pod \"43a808e0-babc-45b3-b69f-12cc77801356\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") "
Dec 03 14:32:54.665585 master-0 kubenswrapper[4409]: I1203 14:32:54.665463 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist\") pod \"43a808e0-babc-45b3-b69f-12cc77801356\" (UID: \"43a808e0-babc-45b3-b69f-12cc77801356\") "
Dec 03 14:32:54.665701 master-0 kubenswrapper[4409]: I1203 14:32:54.665651 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "43a808e0-babc-45b3-b69f-12cc77801356" (UID: "43a808e0-babc-45b3-b69f-12cc77801356"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 03 14:32:54.665957 master-0 kubenswrapper[4409]: I1203 14:32:54.665929 4409 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43a808e0-babc-45b3-b69f-12cc77801356-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:54.666122 master-0 kubenswrapper[4409]: I1203 14:32:54.666093 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "43a808e0-babc-45b3-b69f-12cc77801356" (UID: "43a808e0-babc-45b3-b69f-12cc77801356"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:32:54.666188 master-0 kubenswrapper[4409]: I1203 14:32:54.666160 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready" (OuterVolumeSpecName: "ready") pod "43a808e0-babc-45b3-b69f-12cc77801356" (UID: "43a808e0-babc-45b3-b69f-12cc77801356"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:32:54.671536 master-0 kubenswrapper[4409]: I1203 14:32:54.671464 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2" (OuterVolumeSpecName: "kube-api-access-vggv2") pod "43a808e0-babc-45b3-b69f-12cc77801356" (UID: "43a808e0-babc-45b3-b69f-12cc77801356"). InnerVolumeSpecName "kube-api-access-vggv2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:32:54.767215 master-0 kubenswrapper[4409]: I1203 14:32:54.767130 4409 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/43a808e0-babc-45b3-b69f-12cc77801356-ready\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:54.767215 master-0 kubenswrapper[4409]: I1203 14:32:54.767171 4409 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/43a808e0-babc-45b3-b69f-12cc77801356-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:54.767215 master-0 kubenswrapper[4409]: I1203 14:32:54.767182 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vggv2\" (UniqueName: \"kubernetes.io/projected/43a808e0-babc-45b3-b69f-12cc77801356-kube-api-access-vggv2\") on node \"master-0\" DevicePath \"\""
Dec 03 14:32:54.980835 master-0 kubenswrapper[4409]: I1203 14:32:54.980754 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-4c6x2"]
Dec 03 14:32:54.986771 master-0 kubenswrapper[4409]: I1203 14:32:54.986712 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-4c6x2"]
Dec 03 14:32:55.831315 master-0 kubenswrapper[4409]: I1203 14:32:55.831200 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a808e0-babc-45b3-b69f-12cc77801356" path="/var/lib/kubelet/pods/43a808e0-babc-45b3-b69f-12cc77801356/volumes"
Dec 03 14:32:57.815209 master-0 kubenswrapper[4409]: I1203 14:32:57.815130 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:57.844683 master-0 kubenswrapper[4409]: I1203 14:32:57.844612 4409 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="201c1f63-757e-4efd-8eef-8edaf196315f"
Dec 03 14:32:57.844683 master-0 kubenswrapper[4409]: I1203 14:32:57.844674 4409 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="201c1f63-757e-4efd-8eef-8edaf196315f"
Dec 03 14:32:57.866351 master-0 kubenswrapper[4409]: I1203 14:32:57.866304 4409 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:57.867201 master-0 kubenswrapper[4409]: I1203 14:32:57.867125 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Dec 03 14:32:57.874194 master-0 kubenswrapper[4409]: I1203 14:32:57.874125 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Dec 03 14:32:57.884378 master-0 kubenswrapper[4409]: I1203 14:32:57.884328 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:32:57.890870 master-0 kubenswrapper[4409]: I1203 14:32:57.890822 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Dec 03 14:32:58.677039 master-0 kubenswrapper[4409]: I1203 14:32:58.676928 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0ac3cb6f0e48ec17cf6a6842aa7feeff","Type":"ContainerStarted","Data":"7eb7473078318a059f994c7869789be4c8abe18efe27c1bb4df78a7b17c5f7a3"}
Dec 03 14:32:58.677039 master-0 kubenswrapper[4409]: I1203 14:32:58.677032 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0ac3cb6f0e48ec17cf6a6842aa7feeff","Type":"ContainerStarted","Data":"0aa269aa51e45586f120b4f2f2483f864d9b83239887e7bcd239595338e44ed2"}
Dec 03 14:32:58.677039 master-0 kubenswrapper[4409]: I1203 14:32:58.677049 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0ac3cb6f0e48ec17cf6a6842aa7feeff","Type":"ContainerStarted","Data":"71003e91f2c62ea149b6bd9bed998db377d36b776bb5c33deba21148ecfc92e9"}
Dec 03 14:32:58.677039 master-0 kubenswrapper[4409]: I1203 14:32:58.677062 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0ac3cb6f0e48ec17cf6a6842aa7feeff","Type":"ContainerStarted","Data":"8912bbf00f456772c1674aa57ccd596a6eca46865f0fcf2d28aabda752e731b9"}
Dec 03 14:32:59.693119 master-0 kubenswrapper[4409]: I1203 14:32:59.693028 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"0ac3cb6f0e48ec17cf6a6842aa7feeff","Type":"ContainerStarted","Data":"43fa65127fc380fc0a8edb7bffe57e69e2606f5c95508610606e6e0ef1dfad35"}
Dec 03 14:32:59.876273 master-0 kubenswrapper[4409]: I1203 14:32:59.874429 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.874408592 podStartE2EDuration="2.874408592s" podCreationTimestamp="2025-12-03 14:32:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:32:59.868718189 +0000 UTC m=+412.195780695" watchObservedRunningTime="2025-12-03 14:32:59.874408592 +0000 UTC m=+412.201471098"
Dec 03 14:33:01.709938 master-0 kubenswrapper[4409]: I1203 14:33:01.709900 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-84c998f64f-8stq7_38888547-ed48-4f96-810d-bcd04e49bd6b/multus-admission-controller/1.log"
Dec 03 14:33:01.710590 master-0 kubenswrapper[4409]: I1203 14:33:01.710566 4409 generic.go:334] "Generic (PLEG): container finished" podID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerID="f707b5a65c5a5509b5382c713002ab668a4590bc8b8a861cec9c1fbd38881498" exitCode=137
Dec 03 14:33:01.710734 master-0 kubenswrapper[4409]: I1203 14:33:01.710681 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerDied","Data":"f707b5a65c5a5509b5382c713002ab668a4590bc8b8a861cec9c1fbd38881498"}
Dec 03 14:33:02.436998 master-0 kubenswrapper[4409]: I1203 14:33:02.436957 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-84c998f64f-8stq7_38888547-ed48-4f96-810d-bcd04e49bd6b/multus-admission-controller/1.log"
Dec 03 14:33:02.437279 master-0 kubenswrapper[4409]: I1203 14:33:02.437067 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:33:02.535829 master-0 kubenswrapper[4409]: I1203 14:33:02.535642 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") pod \"38888547-ed48-4f96-810d-bcd04e49bd6b\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") "
Dec 03 14:33:02.535829 master-0 kubenswrapper[4409]: I1203 14:33:02.535791 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") pod \"38888547-ed48-4f96-810d-bcd04e49bd6b\" (UID: \"38888547-ed48-4f96-810d-bcd04e49bd6b\") "
Dec 03 14:33:02.540087 master-0 kubenswrapper[4409]: I1203 14:33:02.540021 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m" (OuterVolumeSpecName: "kube-api-access-fdh5m") pod "38888547-ed48-4f96-810d-bcd04e49bd6b" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b"). InnerVolumeSpecName "kube-api-access-fdh5m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:33:02.540087 master-0 kubenswrapper[4409]: I1203 14:33:02.540042 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "38888547-ed48-4f96-810d-bcd04e49bd6b" (UID: "38888547-ed48-4f96-810d-bcd04e49bd6b"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 14:33:02.638397 master-0 kubenswrapper[4409]: I1203 14:33:02.638236 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdh5m\" (UniqueName: \"kubernetes.io/projected/38888547-ed48-4f96-810d-bcd04e49bd6b-kube-api-access-fdh5m\") on node \"master-0\" DevicePath \"\""
Dec 03 14:33:02.638397 master-0 kubenswrapper[4409]: I1203 14:33:02.638292 4409 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38888547-ed48-4f96-810d-bcd04e49bd6b-webhook-certs\") on node \"master-0\" DevicePath \"\""
Dec 03 14:33:02.723194 master-0 kubenswrapper[4409]: I1203 14:33:02.723083 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-84c998f64f-8stq7_38888547-ed48-4f96-810d-bcd04e49bd6b/multus-admission-controller/1.log"
Dec 03 14:33:02.724248 master-0 kubenswrapper[4409]: I1203 14:33:02.723206 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7" event={"ID":"38888547-ed48-4f96-810d-bcd04e49bd6b","Type":"ContainerDied","Data":"8bad6fc68d181f8eea3007b7d8af3b8a5fd80fc3a0d559583523f7c0f1d226b8"}
Dec 03 14:33:02.724248 master-0 kubenswrapper[4409]: I1203 14:33:02.723255 4409 scope.go:117] "RemoveContainer" containerID="87732635fbd1d41343c81f9da0ba28c1db41ebda9b9ab9b235767cc746eb95c8"
Dec 03 14:33:02.724248 master-0 kubenswrapper[4409]: I1203 14:33:02.723365 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-84c998f64f-8stq7"
Dec 03 14:33:02.750379 master-0 kubenswrapper[4409]: I1203 14:33:02.750336 4409 scope.go:117] "RemoveContainer" containerID="f707b5a65c5a5509b5382c713002ab668a4590bc8b8a861cec9c1fbd38881498"
Dec 03 14:33:02.782645 master-0 kubenswrapper[4409]: I1203 14:33:02.782591 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"]
Dec 03 14:33:02.788374 master-0 kubenswrapper[4409]: I1203 14:33:02.788345 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-84c998f64f-8stq7"]
Dec 03 14:33:03.823933 master-0 kubenswrapper[4409]: I1203 14:33:03.823862 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" path="/var/lib/kubelet/pods/38888547-ed48-4f96-810d-bcd04e49bd6b/volumes"
Dec 03 14:33:04.795685 master-0 kubenswrapper[4409]: I1203 14:33:04.795641 4409 patch_prober.go:28] interesting pod/machine-config-daemon-2ztl9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 03 14:33:04.796040 master-0 kubenswrapper[4409]: I1203 14:33:04.795993 4409 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 03 14:33:04.796148 master-0 kubenswrapper[4409]: I1203 14:33:04.796134 4409 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9"
Dec 03 14:33:04.796891 master-0 kubenswrapper[4409]: I1203 14:33:04.796865 4409 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7361af56804186d5550e28276c7b5d986703bc3ec5adf055804564cd0be8d594"} pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 03 14:33:04.797077 master-0 kubenswrapper[4409]: I1203 14:33:04.797057 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" podUID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerName="machine-config-daemon" containerID="cri-o://7361af56804186d5550e28276c7b5d986703bc3ec5adf055804564cd0be8d594" gracePeriod=600
Dec 03 14:33:05.748285 master-0 kubenswrapper[4409]: I1203 14:33:05.748247 4409 generic.go:334] "Generic (PLEG): container finished" podID="799e819f-f4b2-4ac9-8fa4-7d4da7a79285" containerID="7361af56804186d5550e28276c7b5d986703bc3ec5adf055804564cd0be8d594" exitCode=0
Dec 03 14:33:05.748781 master-0 kubenswrapper[4409]: I1203 14:33:05.748299 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerDied","Data":"7361af56804186d5550e28276c7b5d986703bc3ec5adf055804564cd0be8d594"}
Dec 03 14:33:05.748781 master-0 kubenswrapper[4409]: I1203 14:33:05.748337 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2ztl9" event={"ID":"799e819f-f4b2-4ac9-8fa4-7d4da7a79285","Type":"ContainerStarted","Data":"5157d679c76fb7311333bfa2b8a345f4602b9d0a3522596f67360e3c209e2d45"}
Dec 03 14:33:05.748781 master-0 kubenswrapper[4409]: I1203 14:33:05.748356 4409 scope.go:117] "RemoveContainer" containerID="8ca2899d1d94113fe8f7d5d0b9046638c7992ece74af0aa660c6dc0d87ac321e"
Dec 03 14:33:07.885549 master-0 kubenswrapper[4409]: I1203 14:33:07.885442 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:33:07.885549 master-0 kubenswrapper[4409]: I1203 14:33:07.885515 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:33:07.885549 master-0 kubenswrapper[4409]: I1203 14:33:07.885529 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:33:07.885549 master-0 kubenswrapper[4409]: I1203 14:33:07.885543 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:33:07.886502 master-0 kubenswrapper[4409]: I1203 14:33:07.885735 4409 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Dec 03 14:33:07.886502 master-0 kubenswrapper[4409]: I1203 14:33:07.885785 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="0ac3cb6f0e48ec17cf6a6842aa7feeff" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Dec 03 14:33:07.890204 master-0 kubenswrapper[4409]: I1203 14:33:07.890150 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Dec 03 14:33:08.141044 master-0 kubenswrapper[4409]: I1203 14:33:08.140889 4409 scope.go:117] "RemoveContainer" containerID="3d1f3a793d5a0fae82d4c06f0434d3fdeff9ab9654978c12b7cd7453e94b4bf0"
Dec 03 14:33:08.154769 master-0 kubenswrapper[4409]: I1203 14:33:08.154707 4409 scope.go:117] "RemoveContainer" containerID="03d20d36da747e463e8cb217ec14afd8605f37c0e325d88bd7b1eeb3c83a3a46"
Dec 03 14:33:08.168439 master-0 kubenswrapper[4409]: I1203 14:33:08.168397 4409 scope.go:117] "RemoveContainer" containerID="bd32242d7190de96f3c6abe7180471f3ada5a8b12686f28fd14fd86ddfc80860"
Dec 03 14:33:08.213540 master-0 kubenswrapper[4409]: E1203 14:33:08.213482 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc\": container with ID starting with 3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc not found: ID does not exist" containerID="3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc"
Dec 03 14:33:08.213540 master-0 kubenswrapper[4409]: I1203 14:33:08.213533 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc" err="rpc error: code = NotFound desc = could not find container \"3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc\": container with ID starting with 3d2cb9b6a53d32d6cf0628a4b228f2edb0ff186873907ebbb80dca2725dcb5dc not found: ID does not exist"
Dec 03 14:33:08.213893 master-0 kubenswrapper[4409]: E1203 14:33:08.213845 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0\": container with ID starting with 3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0 not found: ID does not exist" containerID="3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0"
Dec 03 14:33:08.213893 master-0 kubenswrapper[4409]: I1203 14:33:08.213868 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0" err="rpc error: code = NotFound desc = could not find container \"3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0\": container with ID starting with 3e64495540ec395d09554da0475baa05a8cb6612d73bf6067fdd1c3e298617d0 not found: ID does not exist"
Dec 03 14:33:08.214401 master-0 kubenswrapper[4409]: E1203 14:33:08.214338 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb\": container with ID starting with 1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb not found: ID does not exist" containerID="1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb"
Dec 03 14:33:08.214401 master-0 kubenswrapper[4409]: I1203 14:33:08.214392 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb" err="rpc error: code = NotFound desc = could not find container \"1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb\": container with ID starting with 1e2924dd466c1833204bd5a0ccb2a3a2ecf229e5b6243efd4e332c22466750eb not found: ID does not exist"
Dec 03 14:33:08.215418 master-0 kubenswrapper[4409]: E1203 14:33:08.215366 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769\": container with ID starting with feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769 not found: ID does not exist" containerID="feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769"
Dec 03 14:33:08.215418 master-0 kubenswrapper[4409]: I1203 14:33:08.215407 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for
containerID" containerID="feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769" err="rpc error: code = NotFound desc = could not find container \"feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769\": container with ID starting with feecfd8e65a2d8ed0b6f77070376deef980ea7e4712d360f4d69aa8041130769 not found: ID does not exist" Dec 03 14:33:08.215872 master-0 kubenswrapper[4409]: E1203 14:33:08.215819 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3\": container with ID starting with 3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3 not found: ID does not exist" containerID="3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3" Dec 03 14:33:08.215872 master-0 kubenswrapper[4409]: I1203 14:33:08.215853 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3" err="rpc error: code = NotFound desc = could not find container \"3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3\": container with ID starting with 3199f4fb1b4eb47bbbdc965571746168ba828b9258da88d5e6919cff0f5cb3e3 not found: ID does not exist" Dec 03 14:33:08.218346 master-0 kubenswrapper[4409]: E1203 14:33:08.218295 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb\": container with ID starting with 9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb not found: ID does not exist" containerID="9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb" Dec 03 14:33:08.218466 master-0 kubenswrapper[4409]: I1203 14:33:08.218355 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" 
containerID="9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb" err="rpc error: code = NotFound desc = could not find container \"9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb\": container with ID starting with 9d714422207f347ff79197b83132523aed02e0a73fa434f99538b723c542c9cb not found: ID does not exist" Dec 03 14:33:08.219044 master-0 kubenswrapper[4409]: E1203 14:33:08.218988 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1\": container with ID starting with 717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1 not found: ID does not exist" containerID="717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1" Dec 03 14:33:08.219044 master-0 kubenswrapper[4409]: I1203 14:33:08.219034 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1" err="rpc error: code = NotFound desc = could not find container \"717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1\": container with ID starting with 717d46ca9f458e81cb87ca458f62d5ca9435b7a053cd6e36581338a94696eea1 not found: ID does not exist" Dec 03 14:33:08.775793 master-0 kubenswrapper[4409]: I1203 14:33:08.775739 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:33:17.890241 master-0 kubenswrapper[4409]: I1203 14:33:17.890155 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:33:17.895217 master-0 kubenswrapper[4409]: I1203 14:33:17.895156 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Dec 03 14:33:25.896761 
master-0 kubenswrapper[4409]: I1203 14:33:25.896667 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: E1203 14:33:25.897074 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="kube-rbac-proxy" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897091 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="kube-rbac-proxy" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: E1203 14:33:25.897102 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897109 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: E1203 14:33:25.897121 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237f0597-086e-4a5a-a120-59232c16219e" containerName="installer" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897128 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="237f0597-086e-4a5a-a120-59232c16219e" containerName="installer" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: E1203 14:33:25.897137 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="multus-admission-controller" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897142 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="multus-admission-controller" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897264 4409 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="237f0597-086e-4a5a-a120-59232c16219e" containerName="installer" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897276 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="multus-admission-controller" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897284 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="38888547-ed48-4f96-810d-bcd04e49bd6b" containerName="kube-rbac-proxy" Dec 03 14:33:25.897582 master-0 kubenswrapper[4409]: I1203 14:33:25.897296 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a808e0-babc-45b3-b69f-12cc77801356" containerName="kube-multus-additional-cni-plugins" Dec 03 14:33:25.898067 master-0 kubenswrapper[4409]: I1203 14:33:25.897814 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:25.902873 master-0 kubenswrapper[4409]: I1203 14:33:25.902811 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Dec 03 14:33:25.904257 master-0 kubenswrapper[4409]: I1203 14:33:25.904191 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7ctx2" Dec 03 14:33:25.921813 master-0 kubenswrapper[4409]: I1203 14:33:25.921739 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Dec 03 14:33:26.058087 master-0 kubenswrapper[4409]: I1203 14:33:26.057897 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 
03 14:33:26.058087 master-0 kubenswrapper[4409]: I1203 14:33:26.058079 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.161608 master-0 kubenswrapper[4409]: I1203 14:33:26.160488 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.161608 master-0 kubenswrapper[4409]: I1203 14:33:26.160798 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.161608 master-0 kubenswrapper[4409]: I1203 14:33:26.161064 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.183514 master-0 kubenswrapper[4409]: I1203 14:33:26.183425 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " 
pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.233796 master-0 kubenswrapper[4409]: I1203 14:33:26.233707 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:26.749079 master-0 kubenswrapper[4409]: I1203 14:33:26.748989 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Dec 03 14:33:26.901271 master-0 kubenswrapper[4409]: I1203 14:33:26.901190 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"77517526-8ef5-41fb-a289-fac59f1ac7f2","Type":"ContainerStarted","Data":"5011d16cc3e21e1923e870a1c23b592d6ab0d4ba3dd477335460a78722c39564"} Dec 03 14:33:27.911196 master-0 kubenswrapper[4409]: I1203 14:33:27.911124 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"77517526-8ef5-41fb-a289-fac59f1ac7f2","Type":"ContainerStarted","Data":"d37a48419f172bac23a187d101f0420e330f914a2233fc1fc419203e1fe7c5a3"} Dec 03 14:33:28.156632 master-0 kubenswrapper[4409]: I1203 14:33:28.156559 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=3.156539398 podStartE2EDuration="3.156539398s" podCreationTimestamp="2025-12-03 14:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:33:28.154389707 +0000 UTC m=+440.481452213" watchObservedRunningTime="2025-12-03 14:33:28.156539398 +0000 UTC m=+440.483601904" Dec 03 14:33:28.920954 master-0 kubenswrapper[4409]: I1203 14:33:28.920889 4409 generic.go:334] "Generic (PLEG): container finished" podID="77517526-8ef5-41fb-a289-fac59f1ac7f2" containerID="d37a48419f172bac23a187d101f0420e330f914a2233fc1fc419203e1fe7c5a3" exitCode=0 
Dec 03 14:33:28.920954 master-0 kubenswrapper[4409]: I1203 14:33:28.920954 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"77517526-8ef5-41fb-a289-fac59f1ac7f2","Type":"ContainerDied","Data":"d37a48419f172bac23a187d101f0420e330f914a2233fc1fc419203e1fe7c5a3"} Dec 03 14:33:30.199613 master-0 kubenswrapper[4409]: I1203 14:33:30.199426 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:30.374770 master-0 kubenswrapper[4409]: I1203 14:33:30.374672 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir\") pod \"77517526-8ef5-41fb-a289-fac59f1ac7f2\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " Dec 03 14:33:30.375073 master-0 kubenswrapper[4409]: I1203 14:33:30.374835 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access\") pod \"77517526-8ef5-41fb-a289-fac59f1ac7f2\" (UID: \"77517526-8ef5-41fb-a289-fac59f1ac7f2\") " Dec 03 14:33:30.375073 master-0 kubenswrapper[4409]: I1203 14:33:30.374858 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "77517526-8ef5-41fb-a289-fac59f1ac7f2" (UID: "77517526-8ef5-41fb-a289-fac59f1ac7f2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 03 14:33:30.375500 master-0 kubenswrapper[4409]: I1203 14:33:30.375451 4409 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77517526-8ef5-41fb-a289-fac59f1ac7f2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Dec 03 14:33:30.378103 master-0 kubenswrapper[4409]: I1203 14:33:30.378048 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77517526-8ef5-41fb-a289-fac59f1ac7f2" (UID: "77517526-8ef5-41fb-a289-fac59f1ac7f2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:33:30.476795 master-0 kubenswrapper[4409]: I1203 14:33:30.476656 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77517526-8ef5-41fb-a289-fac59f1ac7f2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Dec 03 14:33:30.934156 master-0 kubenswrapper[4409]: I1203 14:33:30.934092 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Dec 03 14:33:30.934471 master-0 kubenswrapper[4409]: I1203 14:33:30.934098 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"77517526-8ef5-41fb-a289-fac59f1ac7f2","Type":"ContainerDied","Data":"5011d16cc3e21e1923e870a1c23b592d6ab0d4ba3dd477335460a78722c39564"} Dec 03 14:33:30.934471 master-0 kubenswrapper[4409]: I1203 14:33:30.934291 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5011d16cc3e21e1923e870a1c23b592d6ab0d4ba3dd477335460a78722c39564" Dec 03 14:34:08.224753 master-0 kubenswrapper[4409]: I1203 14:34:08.224692 4409 scope.go:117] "RemoveContainer" containerID="3d63c97e8181b8a5a98730093a5cc984581455dd5e6126329dda21ffe29cf740" Dec 03 14:35:53.439933 master-0 kubenswrapper[4409]: I1203 14:35:53.439718 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-b2bbq"] Dec 03 14:35:53.441707 master-0 kubenswrapper[4409]: E1203 14:35:53.441654 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77517526-8ef5-41fb-a289-fac59f1ac7f2" containerName="pruner" Dec 03 14:35:53.441707 master-0 kubenswrapper[4409]: I1203 14:35:53.441695 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="77517526-8ef5-41fb-a289-fac59f1ac7f2" containerName="pruner" Dec 03 14:35:53.441929 master-0 kubenswrapper[4409]: I1203 14:35:53.441898 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="77517526-8ef5-41fb-a289-fac59f1ac7f2" containerName="pruner" Dec 03 14:35:53.442823 master-0 kubenswrapper[4409]: I1203 14:35:53.442788 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.445213 master-0 kubenswrapper[4409]: I1203 14:35:53.445153 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Dec 03 14:35:53.445430 master-0 kubenswrapper[4409]: I1203 14:35:53.445395 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Dec 03 14:35:53.445941 master-0 kubenswrapper[4409]: I1203 14:35:53.445910 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Dec 03 14:35:53.447200 master-0 kubenswrapper[4409]: I1203 14:35:53.447064 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-os-client-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.447292 master-0 kubenswrapper[4409]: I1203 14:35:53.447255 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.447399 master-0 kubenswrapper[4409]: I1203 14:35:53.447124 4409 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Dec 03 14:35:53.447476 master-0 kubenswrapper[4409]: I1203 14:35:53.447460 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b42zx\" (UniqueName: \"kubernetes.io/projected/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-kube-api-access-b42zx\") pod 
\"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.472996 master-0 kubenswrapper[4409]: I1203 14:35:53.472664 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-b2bbq"] Dec 03 14:35:53.549091 master-0 kubenswrapper[4409]: I1203 14:35:53.548991 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b42zx\" (UniqueName: \"kubernetes.io/projected/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-kube-api-access-b42zx\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.549406 master-0 kubenswrapper[4409]: I1203 14:35:53.549130 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.549406 master-0 kubenswrapper[4409]: I1203 14:35:53.549165 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-os-client-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.550467 master-0 kubenswrapper[4409]: I1203 14:35:53.550431 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " 
pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.558351 master-0 kubenswrapper[4409]: I1203 14:35:53.558300 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-os-client-config\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.565398 master-0 kubenswrapper[4409]: I1203 14:35:53.565356 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b42zx\" (UniqueName: \"kubernetes.io/projected/3dce6dc5-1f0d-4b6d-9256-aa4315b129ff-kube-api-access-b42zx\") pod \"sushy-emulator-58f4c9b998-b2bbq\" (UID: \"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:53.793316 master-0 kubenswrapper[4409]: I1203 14:35:53.793164 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:35:54.369403 master-0 kubenswrapper[4409]: I1203 14:35:54.369330 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-b2bbq"] Dec 03 14:35:54.381313 master-0 kubenswrapper[4409]: I1203 14:35:54.380872 4409 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:35:55.178240 master-0 kubenswrapper[4409]: I1203 14:35:55.178139 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" event={"ID":"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff","Type":"ContainerStarted","Data":"4d6b3db993628e3492d1f2c47854377234df757e7364da9990179fa97efb0c51"} Dec 03 14:36:07.282240 master-0 kubenswrapper[4409]: I1203 14:36:07.282037 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" event={"ID":"3dce6dc5-1f0d-4b6d-9256-aa4315b129ff","Type":"ContainerStarted","Data":"26645db1254f9f096996f1e66c1b9247acd024cc7947dadeeb78ddde60aad0c4"} Dec 03 14:36:07.530863 master-0 kubenswrapper[4409]: I1203 14:36:07.530774 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" podStartSLOduration=1.9576741869999998 podStartE2EDuration="14.530754107s" podCreationTimestamp="2025-12-03 14:35:53 +0000 UTC" firstStartedPulling="2025-12-03 14:35:54.38069489 +0000 UTC m=+586.707757396" lastFinishedPulling="2025-12-03 14:36:06.95377481 +0000 UTC m=+599.280837316" observedRunningTime="2025-12-03 14:36:07.527235996 +0000 UTC m=+599.854298512" watchObservedRunningTime="2025-12-03 14:36:07.530754107 +0000 UTC m=+599.857816613" Dec 03 14:36:13.794528 master-0 kubenswrapper[4409]: I1203 14:36:13.794431 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:36:13.795740 master-0 
kubenswrapper[4409]: I1203 14:36:13.794565 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:36:13.834238 master-0 kubenswrapper[4409]: I1203 14:36:13.834114 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:36:14.347923 master-0 kubenswrapper[4409]: I1203 14:36:14.347854 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-b2bbq" Dec 03 14:36:45.545043 master-0 kubenswrapper[4409]: I1203 14:36:45.544945 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd"] Dec 03 14:36:45.546649 master-0 kubenswrapper[4409]: I1203 14:36:45.546607 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.559500 master-0 kubenswrapper[4409]: I1203 14:36:45.558917 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k669g\" (UniqueName: \"kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.559500 master-0 kubenswrapper[4409]: I1203 14:36:45.559101 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.559500 master-0 kubenswrapper[4409]: I1203 14:36:45.559203 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.565080 master-0 kubenswrapper[4409]: I1203 14:36:45.565000 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd"] Dec 03 14:36:45.662156 master-0 kubenswrapper[4409]: I1203 14:36:45.661584 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.662156 master-0 kubenswrapper[4409]: I1203 14:36:45.661704 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.662156 master-0 kubenswrapper[4409]: I1203 14:36:45.661792 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k669g\" (UniqueName: 
\"kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.664252 master-0 kubenswrapper[4409]: I1203 14:36:45.662256 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.664252 master-0 kubenswrapper[4409]: I1203 14:36:45.662410 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.679331 master-0 kubenswrapper[4409]: I1203 14:36:45.679266 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k669g\" (UniqueName: \"kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:45.866658 master-0 kubenswrapper[4409]: I1203 14:36:45.866533 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:46.369947 master-0 kubenswrapper[4409]: I1203 14:36:46.369838 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd"] Dec 03 14:36:46.372532 master-0 kubenswrapper[4409]: W1203 14:36:46.372479 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab4bb5f1_b26e_4cc3_b1af_131e498bba3d.slice/crio-b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94 WatchSource:0}: Error finding container b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94: Status 404 returned error can't find the container with id b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94 Dec 03 14:36:46.603964 master-0 kubenswrapper[4409]: I1203 14:36:46.603866 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerStarted","Data":"a55d47440f004b724a458198f08d3208aea3db2ab7073fa56c06e9616efa1a9f"} Dec 03 14:36:46.604662 master-0 kubenswrapper[4409]: I1203 14:36:46.603997 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerStarted","Data":"b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94"} Dec 03 14:36:47.619888 master-0 kubenswrapper[4409]: I1203 14:36:47.619743 4409 generic.go:334] "Generic (PLEG): container finished" podID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerID="a55d47440f004b724a458198f08d3208aea3db2ab7073fa56c06e9616efa1a9f" exitCode=0 Dec 03 14:36:47.620748 master-0 kubenswrapper[4409]: I1203 14:36:47.619900 4409 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerDied","Data":"a55d47440f004b724a458198f08d3208aea3db2ab7073fa56c06e9616efa1a9f"} Dec 03 14:36:49.652851 master-0 kubenswrapper[4409]: I1203 14:36:49.652644 4409 generic.go:334] "Generic (PLEG): container finished" podID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerID="dff83c28c7c1b72729ee1ea1cab95ad6b42cdb71c2103eda13d0aca77c3cf497" exitCode=0 Dec 03 14:36:49.653734 master-0 kubenswrapper[4409]: I1203 14:36:49.652723 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerDied","Data":"dff83c28c7c1b72729ee1ea1cab95ad6b42cdb71c2103eda13d0aca77c3cf497"} Dec 03 14:36:50.669156 master-0 kubenswrapper[4409]: I1203 14:36:50.669088 4409 generic.go:334] "Generic (PLEG): container finished" podID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerID="7652b1277cd1c6f9d93988e3b81a8a35262bc4a4dc5f6d61f65da9a5f207d813" exitCode=0 Dec 03 14:36:50.669783 master-0 kubenswrapper[4409]: I1203 14:36:50.669270 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerDied","Data":"7652b1277cd1c6f9d93988e3b81a8a35262bc4a4dc5f6d61f65da9a5f207d813"} Dec 03 14:36:52.019186 master-0 kubenswrapper[4409]: I1203 14:36:52.019114 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:52.117274 master-0 kubenswrapper[4409]: I1203 14:36:52.117213 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util\") pod \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " Dec 03 14:36:52.117559 master-0 kubenswrapper[4409]: I1203 14:36:52.117396 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle\") pod \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " Dec 03 14:36:52.117559 master-0 kubenswrapper[4409]: I1203 14:36:52.117522 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k669g\" (UniqueName: \"kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g\") pod \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\" (UID: \"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d\") " Dec 03 14:36:52.118337 master-0 kubenswrapper[4409]: I1203 14:36:52.118278 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle" (OuterVolumeSpecName: "bundle") pod "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" (UID: "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:36:52.120623 master-0 kubenswrapper[4409]: I1203 14:36:52.120578 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g" (OuterVolumeSpecName: "kube-api-access-k669g") pod "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" (UID: "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d"). 
InnerVolumeSpecName "kube-api-access-k669g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:36:52.129326 master-0 kubenswrapper[4409]: I1203 14:36:52.129247 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util" (OuterVolumeSpecName: "util") pod "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" (UID: "ab4bb5f1-b26e-4cc3-b1af-131e498bba3d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:36:52.220063 master-0 kubenswrapper[4409]: I1203 14:36:52.219927 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:36:52.220063 master-0 kubenswrapper[4409]: I1203 14:36:52.219975 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k669g\" (UniqueName: \"kubernetes.io/projected/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-kube-api-access-k669g\") on node \"master-0\" DevicePath \"\"" Dec 03 14:36:52.220063 master-0 kubenswrapper[4409]: I1203 14:36:52.219989 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ab4bb5f1-b26e-4cc3-b1af-131e498bba3d-util\") on node \"master-0\" DevicePath \"\"" Dec 03 14:36:52.685783 master-0 kubenswrapper[4409]: I1203 14:36:52.685701 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" event={"ID":"ab4bb5f1-b26e-4cc3-b1af-131e498bba3d","Type":"ContainerDied","Data":"b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94"} Dec 03 14:36:52.685783 master-0 kubenswrapper[4409]: I1203 14:36:52.685749 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01f5cebb7b0b344eae9277e199f209e767d85fe39c1f9dd05cc16bebf7e9c94" Dec 03 14:36:52.685783 master-0 
kubenswrapper[4409]: I1203 14:36:52.685771 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47jnjd" Dec 03 14:36:58.666195 master-0 kubenswrapper[4409]: I1203 14:36:58.666125 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7b9fc4788d-lj42f"] Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: E1203 14:36:58.666518 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="util" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: I1203 14:36:58.666536 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="util" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: E1203 14:36:58.666547 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="pull" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: I1203 14:36:58.666554 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="pull" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: E1203 14:36:58.666575 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="extract" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: I1203 14:36:58.666586 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="extract" Dec 03 14:36:58.666878 master-0 kubenswrapper[4409]: I1203 14:36:58.666744 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4bb5f1-b26e-4cc3-b1af-131e498bba3d" containerName="extract" Dec 03 14:36:58.667410 master-0 kubenswrapper[4409]: I1203 14:36:58.667375 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.670981 master-0 kubenswrapper[4409]: I1203 14:36:58.670946 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Dec 03 14:36:58.671088 master-0 kubenswrapper[4409]: I1203 14:36:58.670990 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Dec 03 14:36:58.672943 master-0 kubenswrapper[4409]: I1203 14:36:58.672918 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Dec 03 14:36:58.673586 master-0 kubenswrapper[4409]: I1203 14:36:58.673564 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Dec 03 14:36:58.674782 master-0 kubenswrapper[4409]: I1203 14:36:58.674761 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Dec 03 14:36:58.698017 master-0 kubenswrapper[4409]: I1203 14:36:58.697922 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7b9fc4788d-lj42f"] Dec 03 14:36:58.737995 master-0 kubenswrapper[4409]: I1203 14:36:58.735423 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-socket-dir\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.737995 master-0 kubenswrapper[4409]: I1203 14:36:58.735522 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mcwf\" (UniqueName: \"kubernetes.io/projected/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-kube-api-access-5mcwf\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: 
\"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.737995 master-0 kubenswrapper[4409]: I1203 14:36:58.735573 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-webhook-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.737995 master-0 kubenswrapper[4409]: I1203 14:36:58.735607 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-apiservice-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.737995 master-0 kubenswrapper[4409]: I1203 14:36:58.735649 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-metrics-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.837344 master-0 kubenswrapper[4409]: I1203 14:36:58.837272 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-apiservice-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.837344 master-0 kubenswrapper[4409]: I1203 14:36:58.837340 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-metrics-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.837662 master-0 kubenswrapper[4409]: I1203 14:36:58.837408 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-socket-dir\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.837662 master-0 kubenswrapper[4409]: I1203 14:36:58.837448 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mcwf\" (UniqueName: \"kubernetes.io/projected/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-kube-api-access-5mcwf\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.837662 master-0 kubenswrapper[4409]: I1203 14:36:58.837487 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-webhook-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.838255 master-0 kubenswrapper[4409]: I1203 14:36:58.838196 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-socket-dir\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.846565 master-0 kubenswrapper[4409]: I1203 14:36:58.846506 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-webhook-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.846821 master-0 kubenswrapper[4409]: I1203 14:36:58.846774 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-metrics-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.846909 master-0 kubenswrapper[4409]: I1203 14:36:58.846795 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-apiservice-cert\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.856727 master-0 kubenswrapper[4409]: I1203 14:36:58.856610 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mcwf\" (UniqueName: \"kubernetes.io/projected/ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e-kube-api-access-5mcwf\") pod \"lvms-operator-7b9fc4788d-lj42f\" (UID: \"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e\") " pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:58.985752 master-0 kubenswrapper[4409]: I1203 14:36:58.985582 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:36:59.451206 master-0 kubenswrapper[4409]: I1203 14:36:59.451115 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7b9fc4788d-lj42f"] Dec 03 14:36:59.454868 master-0 kubenswrapper[4409]: W1203 14:36:59.454812 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef5f4f4d_a7f2_4d5d_9474_f92108be4c1e.slice/crio-cd0d2b804bc299751573f9496f108e27443690c94b0e9244c99ae431d30be685 WatchSource:0}: Error finding container cd0d2b804bc299751573f9496f108e27443690c94b0e9244c99ae431d30be685: Status 404 returned error can't find the container with id cd0d2b804bc299751573f9496f108e27443690c94b0e9244c99ae431d30be685 Dec 03 14:36:59.763926 master-0 kubenswrapper[4409]: I1203 14:36:59.763769 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" event={"ID":"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e","Type":"ContainerStarted","Data":"cd0d2b804bc299751573f9496f108e27443690c94b0e9244c99ae431d30be685"} Dec 03 14:37:04.872982 master-0 kubenswrapper[4409]: I1203 14:37:04.872833 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" event={"ID":"ef5f4f4d-a7f2-4d5d-9474-f92108be4c1e","Type":"ContainerStarted","Data":"1fbb85dc0aca03fd0fb35c82219737fb52ea9ca598edbd30217eaa4363f3a84c"} Dec 03 14:37:04.874323 master-0 kubenswrapper[4409]: I1203 14:37:04.873513 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:37:04.898992 master-0 kubenswrapper[4409]: I1203 14:37:04.898896 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" podStartSLOduration=2.392286779 podStartE2EDuration="6.898873004s" podCreationTimestamp="2025-12-03 
14:36:58 +0000 UTC" firstStartedPulling="2025-12-03 14:36:59.45783871 +0000 UTC m=+651.784901216" lastFinishedPulling="2025-12-03 14:37:03.964424935 +0000 UTC m=+656.291487441" observedRunningTime="2025-12-03 14:37:04.894281152 +0000 UTC m=+657.221343668" watchObservedRunningTime="2025-12-03 14:37:04.898873004 +0000 UTC m=+657.225935510" Dec 03 14:37:05.883766 master-0 kubenswrapper[4409]: I1203 14:37:05.883702 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7b9fc4788d-lj42f" Dec 03 14:37:09.932791 master-0 kubenswrapper[4409]: I1203 14:37:09.932709 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb"] Dec 03 14:37:09.935471 master-0 kubenswrapper[4409]: I1203 14:37:09.935426 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:09.984619 master-0 kubenswrapper[4409]: I1203 14:37:09.984561 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb"] Dec 03 14:37:10.058383 master-0 kubenswrapper[4409]: I1203 14:37:10.058296 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.058383 master-0 kubenswrapper[4409]: I1203 14:37:10.058374 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4hbv\" (UniqueName: 
\"kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.058729 master-0 kubenswrapper[4409]: I1203 14:37:10.058461 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.160088 master-0 kubenswrapper[4409]: I1203 14:37:10.160037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4hbv\" (UniqueName: \"kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.160349 master-0 kubenswrapper[4409]: I1203 14:37:10.160125 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.160349 master-0 kubenswrapper[4409]: I1203 14:37:10.160246 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle\") 
pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.160913 master-0 kubenswrapper[4409]: I1203 14:37:10.160881 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.161159 master-0 kubenswrapper[4409]: I1203 14:37:10.161109 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.176067 master-0 kubenswrapper[4409]: I1203 14:37:10.176000 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4hbv\" (UniqueName: \"kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.286614 master-0 kubenswrapper[4409]: I1203 14:37:10.286493 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:10.796455 master-0 kubenswrapper[4409]: I1203 14:37:10.796389 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb"] Dec 03 14:37:10.800642 master-0 kubenswrapper[4409]: W1203 14:37:10.800585 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe9fa6f_d08e_4a39_9a5a_1f03b695c117.slice/crio-f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240 WatchSource:0}: Error finding container f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240: Status 404 returned error can't find the container with id f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240 Dec 03 14:37:10.921921 master-0 kubenswrapper[4409]: I1203 14:37:10.921847 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" event={"ID":"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117","Type":"ContainerStarted","Data":"f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240"} Dec 03 14:37:11.931458 master-0 kubenswrapper[4409]: I1203 14:37:11.931380 4409 generic.go:334] "Generic (PLEG): container finished" podID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerID="88e3a352910d27d71ab01998539f41d5221480bea249f9074df4401a1f499630" exitCode=0 Dec 03 14:37:11.931458 master-0 kubenswrapper[4409]: I1203 14:37:11.931440 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" event={"ID":"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117","Type":"ContainerDied","Data":"88e3a352910d27d71ab01998539f41d5221480bea249f9074df4401a1f499630"} Dec 03 14:37:12.390339 master-0 kubenswrapper[4409]: I1203 14:37:12.390229 4409 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9"] Dec 03 14:37:12.391947 master-0 kubenswrapper[4409]: I1203 14:37:12.391900 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.413786 master-0 kubenswrapper[4409]: I1203 14:37:12.409340 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9"] Dec 03 14:37:12.459509 master-0 kubenswrapper[4409]: I1203 14:37:12.459424 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl6zw\" (UniqueName: \"kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.459509 master-0 kubenswrapper[4409]: I1203 14:37:12.459501 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.459837 master-0 kubenswrapper[4409]: I1203 14:37:12.459563 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.568995 master-0 kubenswrapper[4409]: I1203 14:37:12.568915 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.569319 master-0 kubenswrapper[4409]: I1203 14:37:12.569103 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.569319 master-0 kubenswrapper[4409]: I1203 14:37:12.569289 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl6zw\" (UniqueName: \"kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.569627 master-0 kubenswrapper[4409]: I1203 14:37:12.569579 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 
14:37:12.569718 master-0 kubenswrapper[4409]: I1203 14:37:12.569580 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.586358 master-0 kubenswrapper[4409]: I1203 14:37:12.586279 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl6zw\" (UniqueName: \"kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:12.710256 master-0 kubenswrapper[4409]: I1203 14:37:12.710110 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:13.090557 master-0 kubenswrapper[4409]: I1203 14:37:13.090485 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r"] Dec 03 14:37:13.094915 master-0 kubenswrapper[4409]: I1203 14:37:13.094874 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.104260 master-0 kubenswrapper[4409]: I1203 14:37:13.104181 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r"] Dec 03 14:37:13.181615 master-0 kubenswrapper[4409]: I1203 14:37:13.181507 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg7qp\" (UniqueName: \"kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.181938 master-0 kubenswrapper[4409]: I1203 14:37:13.181668 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.182074 master-0 kubenswrapper[4409]: I1203 14:37:13.181981 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.203675 master-0 kubenswrapper[4409]: W1203 14:37:13.203614 4409 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9a4a36_b310_4d20_8e41_f4ff9045bb06.slice/crio-fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37 WatchSource:0}: Error finding container fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37: Status 404 returned error can't find the container with id fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37 Dec 03 14:37:13.206705 master-0 kubenswrapper[4409]: I1203 14:37:13.206645 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9"] Dec 03 14:37:13.284031 master-0 kubenswrapper[4409]: I1203 14:37:13.283940 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.284376 master-0 kubenswrapper[4409]: I1203 14:37:13.284226 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg7qp\" (UniqueName: \"kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.284376 master-0 kubenswrapper[4409]: I1203 14:37:13.284267 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.284645 master-0 kubenswrapper[4409]: I1203 14:37:13.284551 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.285135 master-0 kubenswrapper[4409]: I1203 14:37:13.285086 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.302742 master-0 kubenswrapper[4409]: I1203 14:37:13.302652 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg7qp\" (UniqueName: \"kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.420954 master-0 kubenswrapper[4409]: I1203 14:37:13.420780 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:13.632582 master-0 kubenswrapper[4409]: I1203 14:37:13.632435 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:13.636228 master-0 kubenswrapper[4409]: I1203 14:37:13.636172 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.657329 master-0 kubenswrapper[4409]: I1203 14:37:13.657217 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:13.692356 master-0 kubenswrapper[4409]: I1203 14:37:13.692297 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.692620 master-0 kubenswrapper[4409]: I1203 14:37:13.692421 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dkx\" (UniqueName: \"kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.692697 master-0 kubenswrapper[4409]: I1203 14:37:13.692629 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.800031 master-0 kubenswrapper[4409]: 
I1203 14:37:13.799305 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.800031 master-0 kubenswrapper[4409]: I1203 14:37:13.799456 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.800031 master-0 kubenswrapper[4409]: I1203 14:37:13.799509 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dkx\" (UniqueName: \"kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.800443 master-0 kubenswrapper[4409]: I1203 14:37:13.800408 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.801816 master-0 kubenswrapper[4409]: I1203 14:37:13.801766 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.820042 master-0 kubenswrapper[4409]: I1203 
14:37:13.819940 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dkx\" (UniqueName: \"kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx\") pod \"redhat-operators-9jwkv\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:13.859430 master-0 kubenswrapper[4409]: I1203 14:37:13.859364 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r"] Dec 03 14:37:13.948368 master-0 kubenswrapper[4409]: I1203 14:37:13.948253 4409 generic.go:334] "Generic (PLEG): container finished" podID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerID="4e197cfb66a13ce0f981abc504fe242df4660b6bdfc74b40217e5f1ec88bc9ce" exitCode=0 Dec 03 14:37:13.948368 master-0 kubenswrapper[4409]: I1203 14:37:13.948314 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" event={"ID":"9f9a4a36-b310-4d20-8e41-f4ff9045bb06","Type":"ContainerDied","Data":"4e197cfb66a13ce0f981abc504fe242df4660b6bdfc74b40217e5f1ec88bc9ce"} Dec 03 14:37:13.948368 master-0 kubenswrapper[4409]: I1203 14:37:13.948349 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" event={"ID":"9f9a4a36-b310-4d20-8e41-f4ff9045bb06","Type":"ContainerStarted","Data":"fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37"} Dec 03 14:37:13.981142 master-0 kubenswrapper[4409]: I1203 14:37:13.981079 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:15.091616 master-0 kubenswrapper[4409]: W1203 14:37:15.091518 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d3afd39_fd39_4522_ae52_f7122f09e745.slice/crio-ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49 WatchSource:0}: Error finding container ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49: Status 404 returned error can't find the container with id ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49 Dec 03 14:37:15.530660 master-0 kubenswrapper[4409]: I1203 14:37:15.530600 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:15.964577 master-0 kubenswrapper[4409]: I1203 14:37:15.964411 4409 generic.go:334] "Generic (PLEG): container finished" podID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerID="8bc5b926de045cef334eabdc5aa434a9c0b679a21e7d118e75e912316b442893" exitCode=0 Dec 03 14:37:15.964577 master-0 kubenswrapper[4409]: I1203 14:37:15.964522 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" event={"ID":"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117","Type":"ContainerDied","Data":"8bc5b926de045cef334eabdc5aa434a9c0b679a21e7d118e75e912316b442893"} Dec 03 14:37:15.968538 master-0 kubenswrapper[4409]: I1203 14:37:15.967951 4409 generic.go:334] "Generic (PLEG): container finished" podID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerID="e739bb159389f5131a976a6beea70c4b704c1c4b8611f06bad502a0b7776f685" exitCode=0 Dec 03 14:37:15.968538 master-0 kubenswrapper[4409]: I1203 14:37:15.968023 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" 
event={"ID":"9f9a4a36-b310-4d20-8e41-f4ff9045bb06","Type":"ContainerDied","Data":"e739bb159389f5131a976a6beea70c4b704c1c4b8611f06bad502a0b7776f685"} Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.976383 4409 generic.go:334] "Generic (PLEG): container finished" podID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerID="415295fe70809ed1016f97975f236fdd92ea2df4eb152d34079156ba823048a4" exitCode=0 Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.976609 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerDied","Data":"415295fe70809ed1016f97975f236fdd92ea2df4eb152d34079156ba823048a4"} Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.976660 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerStarted","Data":"fc05710759e03f6e667f113e699f34a5b96735771e69dee1563a670c263f7b4e"} Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.979339 4409 generic.go:334] "Generic (PLEG): container finished" podID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerID="1e78f79e93a4f63448608588a6f79e59b7d8f09f3b2e3f09770031d08fa6c566" exitCode=0 Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.979404 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" event={"ID":"4d3afd39-fd39-4522-ae52-f7122f09e745","Type":"ContainerDied","Data":"1e78f79e93a4f63448608588a6f79e59b7d8f09f3b2e3f09770031d08fa6c566"} Dec 03 14:37:15.991334 master-0 kubenswrapper[4409]: I1203 14:37:15.979436 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" 
event={"ID":"4d3afd39-fd39-4522-ae52-f7122f09e745","Type":"ContainerStarted","Data":"ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49"} Dec 03 14:37:16.997913 master-0 kubenswrapper[4409]: I1203 14:37:16.996962 4409 generic.go:334] "Generic (PLEG): container finished" podID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerID="e52b2bbf0ad7e74d245614298c335ab13c5243e41879d10be1e2a4912295fd5f" exitCode=0 Dec 03 14:37:16.997913 master-0 kubenswrapper[4409]: I1203 14:37:16.997082 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" event={"ID":"9f9a4a36-b310-4d20-8e41-f4ff9045bb06","Type":"ContainerDied","Data":"e52b2bbf0ad7e74d245614298c335ab13c5243e41879d10be1e2a4912295fd5f"} Dec 03 14:37:17.005875 master-0 kubenswrapper[4409]: I1203 14:37:17.002266 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9"] Dec 03 14:37:17.005875 master-0 kubenswrapper[4409]: I1203 14:37:17.005205 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.011037 master-0 kubenswrapper[4409]: I1203 14:37:17.008869 4409 generic.go:334] "Generic (PLEG): container finished" podID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerID="3b6d47bb280ae51de96233cddf889a2e11e49ef89d316e22dbda25642576c511" exitCode=0 Dec 03 14:37:17.011037 master-0 kubenswrapper[4409]: I1203 14:37:17.009044 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" event={"ID":"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117","Type":"ContainerDied","Data":"3b6d47bb280ae51de96233cddf889a2e11e49ef89d316e22dbda25642576c511"} Dec 03 14:37:17.017105 master-0 kubenswrapper[4409]: I1203 14:37:17.014042 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9"] Dec 03 14:37:17.059203 master-0 kubenswrapper[4409]: I1203 14:37:17.059127 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.059460 master-0 kubenswrapper[4409]: I1203 14:37:17.059216 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.059460 master-0 kubenswrapper[4409]: 
I1203 14:37:17.059257 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mfkb\" (UniqueName: \"kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.161381 master-0 kubenswrapper[4409]: I1203 14:37:17.160834 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.162814 master-0 kubenswrapper[4409]: I1203 14:37:17.162780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.162891 master-0 kubenswrapper[4409]: I1203 14:37:17.161565 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.162891 master-0 kubenswrapper[4409]: I1203 14:37:17.162847 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-5mfkb\" (UniqueName: \"kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.163241 master-0 kubenswrapper[4409]: I1203 14:37:17.163205 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.187466 master-0 kubenswrapper[4409]: I1203 14:37:17.187395 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mfkb\" (UniqueName: \"kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:17.355562 master-0 kubenswrapper[4409]: I1203 14:37:17.355383 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:18.017314 master-0 kubenswrapper[4409]: I1203 14:37:18.017209 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerStarted","Data":"1deda178a218529a38a8cc47c2aacc9fd472c39ec3fc0c634cf1d3bb3597414e"} Dec 03 14:37:18.020375 master-0 kubenswrapper[4409]: I1203 14:37:18.020308 4409 generic.go:334] "Generic (PLEG): container finished" podID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerID="db73c0565f113588bb6bba5979c9661babcde237c964a933ea8361a8342785b1" exitCode=0 Dec 03 14:37:18.020522 master-0 kubenswrapper[4409]: I1203 14:37:18.020368 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" event={"ID":"4d3afd39-fd39-4522-ae52-f7122f09e745","Type":"ContainerDied","Data":"db73c0565f113588bb6bba5979c9661babcde237c964a933ea8361a8342785b1"} Dec 03 14:37:18.442749 master-0 kubenswrapper[4409]: I1203 14:37:18.442703 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:18.447066 master-0 kubenswrapper[4409]: I1203 14:37:18.446998 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:18.484738 master-0 kubenswrapper[4409]: I1203 14:37:18.484630 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle\") pod \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " Dec 03 14:37:18.484738 master-0 kubenswrapper[4409]: I1203 14:37:18.484726 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle\") pod \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " Dec 03 14:37:18.485355 master-0 kubenswrapper[4409]: I1203 14:37:18.484839 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util\") pod \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " Dec 03 14:37:18.485355 master-0 kubenswrapper[4409]: I1203 14:37:18.484871 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util\") pod \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " Dec 03 14:37:18.485355 master-0 kubenswrapper[4409]: I1203 14:37:18.484955 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl6zw\" (UniqueName: \"kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw\") pod \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\" (UID: \"9f9a4a36-b310-4d20-8e41-f4ff9045bb06\") " Dec 03 14:37:18.485355 master-0 kubenswrapper[4409]: I1203 14:37:18.485032 4409 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-b4hbv\" (UniqueName: \"kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv\") pod \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\" (UID: \"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117\") " Dec 03 14:37:18.486442 master-0 kubenswrapper[4409]: I1203 14:37:18.486320 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle" (OuterVolumeSpecName: "bundle") pod "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" (UID: "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:18.486775 master-0 kubenswrapper[4409]: I1203 14:37:18.486377 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle" (OuterVolumeSpecName: "bundle") pod "9f9a4a36-b310-4d20-8e41-f4ff9045bb06" (UID: "9f9a4a36-b310-4d20-8e41-f4ff9045bb06"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:18.488446 master-0 kubenswrapper[4409]: I1203 14:37:18.488370 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw" (OuterVolumeSpecName: "kube-api-access-gl6zw") pod "9f9a4a36-b310-4d20-8e41-f4ff9045bb06" (UID: "9f9a4a36-b310-4d20-8e41-f4ff9045bb06"). InnerVolumeSpecName "kube-api-access-gl6zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:37:18.488446 master-0 kubenswrapper[4409]: I1203 14:37:18.488411 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv" (OuterVolumeSpecName: "kube-api-access-b4hbv") pod "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" (UID: "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117"). 
InnerVolumeSpecName "kube-api-access-b4hbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:37:18.498150 master-0 kubenswrapper[4409]: I1203 14:37:18.497979 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util" (OuterVolumeSpecName: "util") pod "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" (UID: "dfe9fa6f-d08e-4a39-9a5a-1f03b695c117"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:18.503403 master-0 kubenswrapper[4409]: I1203 14:37:18.503282 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util" (OuterVolumeSpecName: "util") pod "9f9a4a36-b310-4d20-8e41-f4ff9045bb06" (UID: "9f9a4a36-b310-4d20-8e41-f4ff9045bb06"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:18.568697 master-0 kubenswrapper[4409]: I1203 14:37:18.567837 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9"] Dec 03 14:37:18.599210 master-0 kubenswrapper[4409]: I1203 14:37:18.599152 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4hbv\" (UniqueName: \"kubernetes.io/projected/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-kube-api-access-b4hbv\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:18.599210 master-0 kubenswrapper[4409]: I1203 14:37:18.599203 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:18.599485 master-0 kubenswrapper[4409]: I1203 14:37:18.599221 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-bundle\") on node \"master-0\" 
DevicePath \"\"" Dec 03 14:37:18.599485 master-0 kubenswrapper[4409]: I1203 14:37:18.599237 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-util\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:18.599485 master-0 kubenswrapper[4409]: I1203 14:37:18.599252 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dfe9fa6f-d08e-4a39-9a5a-1f03b695c117-util\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:18.599485 master-0 kubenswrapper[4409]: I1203 14:37:18.599266 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl6zw\" (UniqueName: \"kubernetes.io/projected/9f9a4a36-b310-4d20-8e41-f4ff9045bb06-kube-api-access-gl6zw\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:19.036287 master-0 kubenswrapper[4409]: I1203 14:37:19.036179 4409 generic.go:334] "Generic (PLEG): container finished" podID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerID="c54bfd1fe4a5bea8cad2b302c3d13b18393ab0d373d593d1fe374c458837ec41" exitCode=0 Dec 03 14:37:19.037214 master-0 kubenswrapper[4409]: I1203 14:37:19.036378 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" event={"ID":"4d3afd39-fd39-4522-ae52-f7122f09e745","Type":"ContainerDied","Data":"c54bfd1fe4a5bea8cad2b302c3d13b18393ab0d373d593d1fe374c458837ec41"} Dec 03 14:37:19.040535 master-0 kubenswrapper[4409]: I1203 14:37:19.040493 4409 generic.go:334] "Generic (PLEG): container finished" podID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerID="82dabf58e57fce6550950ab1922f1d8c590dfefbf2cecc326bcd96f22f22801c" exitCode=0 Dec 03 14:37:19.040693 master-0 kubenswrapper[4409]: I1203 14:37:19.040635 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" 
event={"ID":"ac6a0863-f90a-4da1-8c65-06d1851b0174","Type":"ContainerDied","Data":"82dabf58e57fce6550950ab1922f1d8c590dfefbf2cecc326bcd96f22f22801c"} Dec 03 14:37:19.040761 master-0 kubenswrapper[4409]: I1203 14:37:19.040719 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" event={"ID":"ac6a0863-f90a-4da1-8c65-06d1851b0174","Type":"ContainerStarted","Data":"5f65d94dff596f4e2e9a1d38e316c406ca95ee5d213f2f6d4de1b9af661e7943"} Dec 03 14:37:19.044069 master-0 kubenswrapper[4409]: I1203 14:37:19.043819 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" event={"ID":"dfe9fa6f-d08e-4a39-9a5a-1f03b695c117","Type":"ContainerDied","Data":"f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240"} Dec 03 14:37:19.044069 master-0 kubenswrapper[4409]: I1203 14:37:19.043888 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7a1a60c0fd3846fcf3298001dbfb693f38928cc89127ff4e699f687c97e0240" Dec 03 14:37:19.044069 master-0 kubenswrapper[4409]: I1203 14:37:19.044000 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5xzbb" Dec 03 14:37:19.059648 master-0 kubenswrapper[4409]: I1203 14:37:19.059601 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" Dec 03 14:37:19.059648 master-0 kubenswrapper[4409]: I1203 14:37:19.059612 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83fc7t9" event={"ID":"9f9a4a36-b310-4d20-8e41-f4ff9045bb06","Type":"ContainerDied","Data":"fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37"} Dec 03 14:37:19.060143 master-0 kubenswrapper[4409]: I1203 14:37:19.059667 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7ed33ba3fb4c0eb180ed356469f76b1a491f87d83ea95b4a54562037315b37" Dec 03 14:37:19.064321 master-0 kubenswrapper[4409]: I1203 14:37:19.064273 4409 generic.go:334] "Generic (PLEG): container finished" podID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerID="1deda178a218529a38a8cc47c2aacc9fd472c39ec3fc0c634cf1d3bb3597414e" exitCode=0 Dec 03 14:37:19.064674 master-0 kubenswrapper[4409]: I1203 14:37:19.064329 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerDied","Data":"1deda178a218529a38a8cc47c2aacc9fd472c39ec3fc0c634cf1d3bb3597414e"} Dec 03 14:37:20.073944 master-0 kubenswrapper[4409]: I1203 14:37:20.073884 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerStarted","Data":"f9e527141a3029b6ca51c4a953c344811c78001305917acd2b5d41442aa0f2a1"} Dec 03 14:37:20.098741 master-0 kubenswrapper[4409]: I1203 14:37:20.098650 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jwkv" podStartSLOduration=3.528696358 podStartE2EDuration="7.098608091s" podCreationTimestamp="2025-12-03 14:37:13 +0000 UTC" firstStartedPulling="2025-12-03 
14:37:15.988925882 +0000 UTC m=+668.315988388" lastFinishedPulling="2025-12-03 14:37:19.558837625 +0000 UTC m=+671.885900121" observedRunningTime="2025-12-03 14:37:20.098384624 +0000 UTC m=+672.425447150" watchObservedRunningTime="2025-12-03 14:37:20.098608091 +0000 UTC m=+672.425670617" Dec 03 14:37:20.441244 master-0 kubenswrapper[4409]: I1203 14:37:20.440769 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:20.533280 master-0 kubenswrapper[4409]: I1203 14:37:20.533105 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg7qp\" (UniqueName: \"kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp\") pod \"4d3afd39-fd39-4522-ae52-f7122f09e745\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " Dec 03 14:37:20.533280 master-0 kubenswrapper[4409]: I1203 14:37:20.533239 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle\") pod \"4d3afd39-fd39-4522-ae52-f7122f09e745\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " Dec 03 14:37:20.533714 master-0 kubenswrapper[4409]: I1203 14:37:20.533412 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util\") pod \"4d3afd39-fd39-4522-ae52-f7122f09e745\" (UID: \"4d3afd39-fd39-4522-ae52-f7122f09e745\") " Dec 03 14:37:20.535112 master-0 kubenswrapper[4409]: I1203 14:37:20.535052 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle" (OuterVolumeSpecName: "bundle") pod "4d3afd39-fd39-4522-ae52-f7122f09e745" (UID: "4d3afd39-fd39-4522-ae52-f7122f09e745"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:20.537659 master-0 kubenswrapper[4409]: I1203 14:37:20.537613 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp" (OuterVolumeSpecName: "kube-api-access-jg7qp") pod "4d3afd39-fd39-4522-ae52-f7122f09e745" (UID: "4d3afd39-fd39-4522-ae52-f7122f09e745"). InnerVolumeSpecName "kube-api-access-jg7qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:37:20.562089 master-0 kubenswrapper[4409]: I1203 14:37:20.561943 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util" (OuterVolumeSpecName: "util") pod "4d3afd39-fd39-4522-ae52-f7122f09e745" (UID: "4d3afd39-fd39-4522-ae52-f7122f09e745"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:20.639227 master-0 kubenswrapper[4409]: I1203 14:37:20.635480 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-util\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:20.639227 master-0 kubenswrapper[4409]: I1203 14:37:20.635545 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg7qp\" (UniqueName: \"kubernetes.io/projected/4d3afd39-fd39-4522-ae52-f7122f09e745-kube-api-access-jg7qp\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:20.639227 master-0 kubenswrapper[4409]: I1203 14:37:20.635563 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d3afd39-fd39-4522-ae52-f7122f09e745-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:21.083082 master-0 kubenswrapper[4409]: I1203 14:37:21.083018 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" event={"ID":"4d3afd39-fd39-4522-ae52-f7122f09e745","Type":"ContainerDied","Data":"ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49"} Dec 03 14:37:21.083792 master-0 kubenswrapper[4409]: I1203 14:37:21.083778 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5b60aa9673ffb3c22e49831900288b82215887db9cac108c014096cfbe9b49" Dec 03 14:37:21.083985 master-0 kubenswrapper[4409]: I1203 14:37:21.083063 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh888r" Dec 03 14:37:21.084979 master-0 kubenswrapper[4409]: I1203 14:37:21.084886 4409 generic.go:334] "Generic (PLEG): container finished" podID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerID="00cfce159ee740ce6a1f72357d363043dbf4b1f01c902bb1e19f9e18a9f23d0b" exitCode=0 Dec 03 14:37:21.085065 master-0 kubenswrapper[4409]: I1203 14:37:21.084978 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" event={"ID":"ac6a0863-f90a-4da1-8c65-06d1851b0174","Type":"ContainerDied","Data":"00cfce159ee740ce6a1f72357d363043dbf4b1f01c902bb1e19f9e18a9f23d0b"} Dec 03 14:37:22.095755 master-0 kubenswrapper[4409]: I1203 14:37:22.095698 4409 generic.go:334] "Generic (PLEG): container finished" podID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerID="445ce423068b894748f58ed9c80ebc9eb6b46c10d315aa8ae45a6f2417c1f95d" exitCode=0 Dec 03 14:37:22.095755 master-0 kubenswrapper[4409]: I1203 14:37:22.095760 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" event={"ID":"ac6a0863-f90a-4da1-8c65-06d1851b0174","Type":"ContainerDied","Data":"445ce423068b894748f58ed9c80ebc9eb6b46c10d315aa8ae45a6f2417c1f95d"} 
Dec 03 14:37:23.572665 master-0 kubenswrapper[4409]: I1203 14:37:23.572603 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:23.735929 master-0 kubenswrapper[4409]: I1203 14:37:23.735836 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mfkb\" (UniqueName: \"kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb\") pod \"ac6a0863-f90a-4da1-8c65-06d1851b0174\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " Dec 03 14:37:23.736244 master-0 kubenswrapper[4409]: I1203 14:37:23.736058 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle\") pod \"ac6a0863-f90a-4da1-8c65-06d1851b0174\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " Dec 03 14:37:23.736244 master-0 kubenswrapper[4409]: I1203 14:37:23.736084 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util\") pod \"ac6a0863-f90a-4da1-8c65-06d1851b0174\" (UID: \"ac6a0863-f90a-4da1-8c65-06d1851b0174\") " Dec 03 14:37:23.740030 master-0 kubenswrapper[4409]: I1203 14:37:23.739970 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle" (OuterVolumeSpecName: "bundle") pod "ac6a0863-f90a-4da1-8c65-06d1851b0174" (UID: "ac6a0863-f90a-4da1-8c65-06d1851b0174"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:23.743207 master-0 kubenswrapper[4409]: I1203 14:37:23.743155 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb" (OuterVolumeSpecName: "kube-api-access-5mfkb") pod "ac6a0863-f90a-4da1-8c65-06d1851b0174" (UID: "ac6a0863-f90a-4da1-8c65-06d1851b0174"). InnerVolumeSpecName "kube-api-access-5mfkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:37:23.751767 master-0 kubenswrapper[4409]: I1203 14:37:23.751704 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util" (OuterVolumeSpecName: "util") pod "ac6a0863-f90a-4da1-8c65-06d1851b0174" (UID: "ac6a0863-f90a-4da1-8c65-06d1851b0174"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:23.837872 master-0 kubenswrapper[4409]: I1203 14:37:23.837815 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mfkb\" (UniqueName: \"kubernetes.io/projected/ac6a0863-f90a-4da1-8c65-06d1851b0174-kube-api-access-5mfkb\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:23.838448 master-0 kubenswrapper[4409]: I1203 14:37:23.838423 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-bundle\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:23.838502 master-0 kubenswrapper[4409]: I1203 14:37:23.838451 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac6a0863-f90a-4da1-8c65-06d1851b0174-util\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:23.981585 master-0 kubenswrapper[4409]: I1203 14:37:23.981507 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 
14:37:23.981585 master-0 kubenswrapper[4409]: I1203 14:37:23.981594 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:24.111793 master-0 kubenswrapper[4409]: I1203 14:37:24.111638 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" event={"ID":"ac6a0863-f90a-4da1-8c65-06d1851b0174","Type":"ContainerDied","Data":"5f65d94dff596f4e2e9a1d38e316c406ca95ee5d213f2f6d4de1b9af661e7943"} Dec 03 14:37:24.111793 master-0 kubenswrapper[4409]: I1203 14:37:24.111706 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f65d94dff596f4e2e9a1d38e316c406ca95ee5d213f2f6d4de1b9af661e7943" Dec 03 14:37:24.111793 master-0 kubenswrapper[4409]: I1203 14:37:24.111741 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102t7c9" Dec 03 14:37:25.028966 master-0 kubenswrapper[4409]: I1203 14:37:25.028886 4409 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jwkv" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="registry-server" probeResult="failure" output=< Dec 03 14:37:25.028966 master-0 kubenswrapper[4409]: timeout: failed to connect service ":50051" within 1s Dec 03 14:37:25.028966 master-0 kubenswrapper[4409]: > Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.488071 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7"] Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.488862 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.488895 4409 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.488945 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.488955 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.488981 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.488992 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489102 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489116 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489141 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489149 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489172 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 
14:37:31.489180 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489202 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489213 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489234 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489243 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489268 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489278 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489296 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489304 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489332 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: 
I1203 14:37:31.489341 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="util" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: E1203 14:37:31.489362 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489372 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="pull" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489750 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d3afd39-fd39-4522-ae52-f7122f09e745" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489788 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9a4a36-b310-4d20-8e41-f4ff9045bb06" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489805 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfe9fa6f-d08e-4a39-9a5a-1f03b695c117" containerName="extract" Dec 03 14:37:31.490052 master-0 kubenswrapper[4409]: I1203 14:37:31.489824 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac6a0863-f90a-4da1-8c65-06d1851b0174" containerName="extract" Dec 03 14:37:31.492584 master-0 kubenswrapper[4409]: I1203 14:37:31.490826 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.500416 master-0 kubenswrapper[4409]: I1203 14:37:31.500346 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 03 14:37:31.504115 master-0 kubenswrapper[4409]: I1203 14:37:31.501717 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 03 14:37:31.540370 master-0 kubenswrapper[4409]: I1203 14:37:31.540308 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7"] Dec 03 14:37:31.593303 master-0 kubenswrapper[4409]: I1203 14:37:31.590555 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz75b\" (UniqueName: \"kubernetes.io/projected/68733ba7-554c-4d00-8b7c-1c247fb37a5d-kube-api-access-nz75b\") pod \"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.593303 master-0 kubenswrapper[4409]: I1203 14:37:31.590638 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68733ba7-554c-4d00-8b7c-1c247fb37a5d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.692786 master-0 kubenswrapper[4409]: I1203 14:37:31.692694 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz75b\" (UniqueName: \"kubernetes.io/projected/68733ba7-554c-4d00-8b7c-1c247fb37a5d-kube-api-access-nz75b\") pod 
\"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.692786 master-0 kubenswrapper[4409]: I1203 14:37:31.692788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68733ba7-554c-4d00-8b7c-1c247fb37a5d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.693491 master-0 kubenswrapper[4409]: I1203 14:37:31.693436 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/68733ba7-554c-4d00-8b7c-1c247fb37a5d-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.711798 master-0 kubenswrapper[4409]: I1203 14:37:31.711696 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz75b\" (UniqueName: \"kubernetes.io/projected/68733ba7-554c-4d00-8b7c-1c247fb37a5d-kube-api-access-nz75b\") pod \"cert-manager-operator-controller-manager-64cf6dff88-48jz7\" (UID: \"68733ba7-554c-4d00-8b7c-1c247fb37a5d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:31.834086 master-0 kubenswrapper[4409]: I1203 14:37:31.834024 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" Dec 03 14:37:32.361313 master-0 kubenswrapper[4409]: I1203 14:37:32.361261 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7"] Dec 03 14:37:33.212824 master-0 kubenswrapper[4409]: I1203 14:37:33.212757 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" event={"ID":"68733ba7-554c-4d00-8b7c-1c247fb37a5d","Type":"ContainerStarted","Data":"81c0a6359b1564c1ca5879149a9b0c05fc7c6318ba74f37b10115a0e05312447"} Dec 03 14:37:34.043975 master-0 kubenswrapper[4409]: I1203 14:37:34.043914 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:34.095636 master-0 kubenswrapper[4409]: I1203 14:37:34.095577 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:37.830707 master-0 kubenswrapper[4409]: I1203 14:37:37.830626 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:37.831629 master-0 kubenswrapper[4409]: I1203 14:37:37.830968 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jwkv" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="registry-server" containerID="cri-o://f9e527141a3029b6ca51c4a953c344811c78001305917acd2b5d41442aa0f2a1" gracePeriod=2 Dec 03 14:37:37.934169 master-0 kubenswrapper[4409]: I1203 14:37:37.933270 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5"] Dec 03 14:37:37.938144 master-0 kubenswrapper[4409]: I1203 14:37:37.936423 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" Dec 03 14:37:37.945363 master-0 kubenswrapper[4409]: I1203 14:37:37.943891 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 03 14:37:37.945363 master-0 kubenswrapper[4409]: I1203 14:37:37.943984 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 03 14:37:37.978317 master-0 kubenswrapper[4409]: I1203 14:37:37.975978 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5"] Dec 03 14:37:38.142171 master-0 kubenswrapper[4409]: I1203 14:37:38.141068 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcldb\" (UniqueName: \"kubernetes.io/projected/563d4899-8cf3-4ccc-965b-4b63573da5f7-kube-api-access-hcldb\") pod \"nmstate-operator-5b5b58f5c8-ddls5\" (UID: \"563d4899-8cf3-4ccc-965b-4b63573da5f7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" Dec 03 14:37:38.243147 master-0 kubenswrapper[4409]: I1203 14:37:38.243063 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcldb\" (UniqueName: \"kubernetes.io/projected/563d4899-8cf3-4ccc-965b-4b63573da5f7-kube-api-access-hcldb\") pod \"nmstate-operator-5b5b58f5c8-ddls5\" (UID: \"563d4899-8cf3-4ccc-965b-4b63573da5f7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" Dec 03 14:37:38.262272 master-0 kubenswrapper[4409]: I1203 14:37:38.261276 4409 generic.go:334] "Generic (PLEG): container finished" podID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerID="f9e527141a3029b6ca51c4a953c344811c78001305917acd2b5d41442aa0f2a1" exitCode=0 Dec 03 14:37:38.262272 master-0 kubenswrapper[4409]: I1203 14:37:38.261341 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" 
event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerDied","Data":"f9e527141a3029b6ca51c4a953c344811c78001305917acd2b5d41442aa0f2a1"} Dec 03 14:37:38.265790 master-0 kubenswrapper[4409]: I1203 14:37:38.265750 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcldb\" (UniqueName: \"kubernetes.io/projected/563d4899-8cf3-4ccc-965b-4b63573da5f7-kube-api-access-hcldb\") pod \"nmstate-operator-5b5b58f5c8-ddls5\" (UID: \"563d4899-8cf3-4ccc-965b-4b63573da5f7\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" Dec 03 14:37:38.307052 master-0 kubenswrapper[4409]: I1203 14:37:38.306511 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" Dec 03 14:37:38.860254 master-0 kubenswrapper[4409]: I1203 14:37:38.860170 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s"] Dec 03 14:37:38.861266 master-0 kubenswrapper[4409]: I1203 14:37:38.861234 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:38.865649 master-0 kubenswrapper[4409]: I1203 14:37:38.865591 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 03 14:37:38.865975 master-0 kubenswrapper[4409]: I1203 14:37:38.865943 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 03 14:37:38.866321 master-0 kubenswrapper[4409]: I1203 14:37:38.866292 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 03 14:37:38.866446 master-0 kubenswrapper[4409]: I1203 14:37:38.866419 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 03 14:37:38.884843 master-0 kubenswrapper[4409]: I1203 14:37:38.884769 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s"] Dec 03 14:37:39.064178 master-0 kubenswrapper[4409]: I1203 14:37:39.064072 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-apiservice-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.064773 master-0 kubenswrapper[4409]: I1203 14:37:39.064736 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqwqc\" (UniqueName: \"kubernetes.io/projected/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-kube-api-access-nqwqc\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " 
pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.065002 master-0 kubenswrapper[4409]: I1203 14:37:39.064945 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-webhook-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.167902 master-0 kubenswrapper[4409]: I1203 14:37:39.167557 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-webhook-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.167902 master-0 kubenswrapper[4409]: I1203 14:37:39.167729 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-apiservice-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.167902 master-0 kubenswrapper[4409]: I1203 14:37:39.167789 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwqc\" (UniqueName: \"kubernetes.io/projected/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-kube-api-access-nqwqc\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.172398 master-0 kubenswrapper[4409]: I1203 14:37:39.172344 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-apiservice-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.174098 master-0 kubenswrapper[4409]: I1203 14:37:39.174042 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-webhook-cert\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.274364 master-0 kubenswrapper[4409]: I1203 14:37:39.271577 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwqc\" (UniqueName: \"kubernetes.io/projected/46923c3d-e14a-49da-9ea1-34a6c7ae5ff1-kube-api-access-nqwqc\") pod \"metallb-operator-controller-manager-d988cbf4b-f589s\" (UID: \"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1\") " pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.537665 master-0 kubenswrapper[4409]: I1203 14:37:39.537535 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:39.555725 master-0 kubenswrapper[4409]: I1203 14:37:39.555637 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt"] Dec 03 14:37:39.557100 master-0 kubenswrapper[4409]: I1203 14:37:39.557062 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.574948 master-0 kubenswrapper[4409]: I1203 14:37:39.570522 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 03 14:37:39.574948 master-0 kubenswrapper[4409]: I1203 14:37:39.570762 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 14:37:39.644108 master-0 kubenswrapper[4409]: I1203 14:37:39.641587 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt"] Dec 03 14:37:39.685032 master-0 kubenswrapper[4409]: I1203 14:37:39.684427 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbqv\" (UniqueName: \"kubernetes.io/projected/884460b9-240c-49c8-bbab-8eb8cd66c30b-kube-api-access-ztbqv\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.685032 master-0 kubenswrapper[4409]: I1203 14:37:39.684523 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-webhook-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.685032 master-0 kubenswrapper[4409]: I1203 14:37:39.684556 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-apiservice-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: 
\"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.793154 master-0 kubenswrapper[4409]: I1203 14:37:39.786692 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbqv\" (UniqueName: \"kubernetes.io/projected/884460b9-240c-49c8-bbab-8eb8cd66c30b-kube-api-access-ztbqv\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.793154 master-0 kubenswrapper[4409]: I1203 14:37:39.786780 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-webhook-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.793154 master-0 kubenswrapper[4409]: I1203 14:37:39.786803 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-apiservice-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.793154 master-0 kubenswrapper[4409]: I1203 14:37:39.792258 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-apiservice-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.802149 master-0 kubenswrapper[4409]: I1203 14:37:39.795695 4409 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/884460b9-240c-49c8-bbab-8eb8cd66c30b-webhook-cert\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.808307 master-0 kubenswrapper[4409]: I1203 14:37:39.808229 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbqv\" (UniqueName: \"kubernetes.io/projected/884460b9-240c-49c8-bbab-8eb8cd66c30b-kube-api-access-ztbqv\") pod \"metallb-operator-webhook-server-7d9bbdcff5-k5wlt\" (UID: \"884460b9-240c-49c8-bbab-8eb8cd66c30b\") " pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:39.886278 master-0 kubenswrapper[4409]: I1203 14:37:39.886188 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:42.041035 master-0 kubenswrapper[4409]: I1203 14:37:42.040886 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:42.183665 master-0 kubenswrapper[4409]: I1203 14:37:42.161595 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities\") pod \"48248c13-ef92-44f0-8bd7-f6163adedd6d\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " Dec 03 14:37:42.183665 master-0 kubenswrapper[4409]: I1203 14:37:42.161664 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dkx\" (UniqueName: \"kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx\") pod \"48248c13-ef92-44f0-8bd7-f6163adedd6d\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " Dec 03 14:37:42.183665 master-0 kubenswrapper[4409]: I1203 14:37:42.161720 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content\") pod \"48248c13-ef92-44f0-8bd7-f6163adedd6d\" (UID: \"48248c13-ef92-44f0-8bd7-f6163adedd6d\") " Dec 03 14:37:42.183665 master-0 kubenswrapper[4409]: I1203 14:37:42.162739 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities" (OuterVolumeSpecName: "utilities") pod "48248c13-ef92-44f0-8bd7-f6163adedd6d" (UID: "48248c13-ef92-44f0-8bd7-f6163adedd6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:42.210305 master-0 kubenswrapper[4409]: I1203 14:37:42.189334 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx" (OuterVolumeSpecName: "kube-api-access-d8dkx") pod "48248c13-ef92-44f0-8bd7-f6163adedd6d" (UID: "48248c13-ef92-44f0-8bd7-f6163adedd6d"). 
InnerVolumeSpecName "kube-api-access-d8dkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:37:42.267101 master-0 kubenswrapper[4409]: I1203 14:37:42.264256 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:42.267101 master-0 kubenswrapper[4409]: I1203 14:37:42.264314 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dkx\" (UniqueName: \"kubernetes.io/projected/48248c13-ef92-44f0-8bd7-f6163adedd6d-kube-api-access-d8dkx\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:42.314292 master-0 kubenswrapper[4409]: I1203 14:37:42.313966 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jwkv" event={"ID":"48248c13-ef92-44f0-8bd7-f6163adedd6d","Type":"ContainerDied","Data":"fc05710759e03f6e667f113e699f34a5b96735771e69dee1563a670c263f7b4e"} Dec 03 14:37:42.314292 master-0 kubenswrapper[4409]: I1203 14:37:42.314089 4409 scope.go:117] "RemoveContainer" containerID="f9e527141a3029b6ca51c4a953c344811c78001305917acd2b5d41442aa0f2a1" Dec 03 14:37:42.314292 master-0 kubenswrapper[4409]: I1203 14:37:42.314256 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jwkv" Dec 03 14:37:42.349259 master-0 kubenswrapper[4409]: I1203 14:37:42.349209 4409 scope.go:117] "RemoveContainer" containerID="1deda178a218529a38a8cc47c2aacc9fd472c39ec3fc0c634cf1d3bb3597414e" Dec 03 14:37:42.400286 master-0 kubenswrapper[4409]: I1203 14:37:42.399597 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48248c13-ef92-44f0-8bd7-f6163adedd6d" (UID: "48248c13-ef92-44f0-8bd7-f6163adedd6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:37:42.404963 master-0 kubenswrapper[4409]: I1203 14:37:42.404922 4409 scope.go:117] "RemoveContainer" containerID="415295fe70809ed1016f97975f236fdd92ea2df4eb152d34079156ba823048a4" Dec 03 14:37:42.520168 master-0 kubenswrapper[4409]: I1203 14:37:42.519986 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48248c13-ef92-44f0-8bd7-f6163adedd6d-catalog-content\") on node \"master-0\" DevicePath \"\"" Dec 03 14:37:42.699308 master-0 kubenswrapper[4409]: I1203 14:37:42.698798 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:42.709182 master-0 kubenswrapper[4409]: I1203 14:37:42.709121 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt"] Dec 03 14:37:42.716369 master-0 kubenswrapper[4409]: I1203 14:37:42.716164 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jwkv"] Dec 03 14:37:42.727410 master-0 kubenswrapper[4409]: W1203 14:37:42.727267 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod884460b9_240c_49c8_bbab_8eb8cd66c30b.slice/crio-b9b8725f8c1e6384f92aa1a8e763e550427d020bda9891da7d6b603ba589d4b2 WatchSource:0}: Error finding container b9b8725f8c1e6384f92aa1a8e763e550427d020bda9891da7d6b603ba589d4b2: Status 404 returned error can't find the container with id b9b8725f8c1e6384f92aa1a8e763e550427d020bda9891da7d6b603ba589d4b2 Dec 03 14:37:42.924035 master-0 kubenswrapper[4409]: I1203 14:37:42.922463 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s"] Dec 03 14:37:42.956047 master-0 kubenswrapper[4409]: I1203 14:37:42.951733 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5"] Dec 03 14:37:43.333190 master-0 kubenswrapper[4409]: I1203 14:37:43.332250 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" event={"ID":"884460b9-240c-49c8-bbab-8eb8cd66c30b","Type":"ContainerStarted","Data":"b9b8725f8c1e6384f92aa1a8e763e550427d020bda9891da7d6b603ba589d4b2"} Dec 03 14:37:43.343551 master-0 kubenswrapper[4409]: I1203 14:37:43.343453 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" event={"ID":"563d4899-8cf3-4ccc-965b-4b63573da5f7","Type":"ContainerStarted","Data":"f57757e4bb137f0da496ac52a4c4be3f6d908bc5f59818b2b8ab2c7d5a0ad4bf"} Dec 03 14:37:43.347289 master-0 kubenswrapper[4409]: I1203 14:37:43.346431 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" event={"ID":"68733ba7-554c-4d00-8b7c-1c247fb37a5d","Type":"ContainerStarted","Data":"72adf0d02dd8271ebaee479a76a2df28c97ad5b15dc3842faaa5cb27a135aa2d"} Dec 03 14:37:43.360361 master-0 kubenswrapper[4409]: I1203 14:37:43.360273 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" event={"ID":"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1","Type":"ContainerStarted","Data":"971d44c8ae276a337acad039f485f21602f7b55ba99d7a56e1689dfcbb8952aa"} Dec 03 14:37:43.388913 master-0 kubenswrapper[4409]: I1203 14:37:43.388799 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-48jz7" podStartSLOduration=2.666451588 podStartE2EDuration="12.388704857s" podCreationTimestamp="2025-12-03 14:37:31 +0000 UTC" firstStartedPulling="2025-12-03 14:37:32.366546857 +0000 UTC m=+684.693609363" lastFinishedPulling="2025-12-03 14:37:42.088800126 +0000 UTC m=+694.415862632" 
observedRunningTime="2025-12-03 14:37:43.385152085 +0000 UTC m=+695.712214601" watchObservedRunningTime="2025-12-03 14:37:43.388704857 +0000 UTC m=+695.715767363" Dec 03 14:37:43.831096 master-0 kubenswrapper[4409]: I1203 14:37:43.831048 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" path="/var/lib/kubelet/pods/48248c13-ef92-44f0-8bd7-f6163adedd6d/volumes" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.770216 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kfv5q"] Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: E1203 14:37:45.770716 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="extract-content" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.770733 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="extract-content" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: E1203 14:37:45.770743 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="registry-server" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.770749 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="registry-server" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: E1203 14:37:45.770766 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="extract-utilities" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.770775 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="extract-utilities" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.771034 4409 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="48248c13-ef92-44f0-8bd7-f6163adedd6d" containerName="registry-server" Dec 03 14:37:45.772563 master-0 kubenswrapper[4409]: I1203 14:37:45.771753 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:45.789354 master-0 kubenswrapper[4409]: I1203 14:37:45.787757 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 03 14:37:45.789354 master-0 kubenswrapper[4409]: I1203 14:37:45.788480 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 03 14:37:45.803365 master-0 kubenswrapper[4409]: I1203 14:37:45.803311 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kfv5q"] Dec 03 14:37:45.843949 master-0 kubenswrapper[4409]: I1203 14:37:45.843854 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr55x\" (UniqueName: \"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-kube-api-access-lr55x\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:45.844220 master-0 kubenswrapper[4409]: I1203 14:37:45.843988 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:45.945635 master-0 kubenswrapper[4409]: I1203 14:37:45.945571 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr55x\" (UniqueName: 
\"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-kube-api-access-lr55x\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:45.945894 master-0 kubenswrapper[4409]: I1203 14:37:45.945730 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:46.503730 master-0 kubenswrapper[4409]: I1203 14:37:46.503390 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr55x\" (UniqueName: \"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-kube-api-access-lr55x\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:46.514235 master-0 kubenswrapper[4409]: I1203 14:37:46.513653 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d7297bea-bca8-4adb-a0b5-329abb9361ef-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kfv5q\" (UID: \"d7297bea-bca8-4adb-a0b5-329abb9361ef\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:46.720429 master-0 kubenswrapper[4409]: I1203 14:37:46.720289 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" Dec 03 14:37:49.479042 master-0 kubenswrapper[4409]: I1203 14:37:49.476623 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd"] Dec 03 14:37:49.479042 master-0 kubenswrapper[4409]: I1203 14:37:49.478896 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.492031 master-0 kubenswrapper[4409]: I1203 14:37:49.489885 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hxr\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-kube-api-access-w5hxr\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.492031 master-0 kubenswrapper[4409]: I1203 14:37:49.490486 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.528034 master-0 kubenswrapper[4409]: I1203 14:37:49.521204 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd"] Dec 03 14:37:49.593031 master-0 kubenswrapper[4409]: I1203 14:37:49.591420 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5hxr\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-kube-api-access-w5hxr\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.593031 master-0 kubenswrapper[4409]: I1203 14:37:49.591546 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.680372 master-0 kubenswrapper[4409]: I1203 14:37:49.680299 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.697403 master-0 kubenswrapper[4409]: I1203 14:37:49.695349 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5hxr\" (UniqueName: \"kubernetes.io/projected/dba74507-8eb4-4eb4-bc0b-deffbea958d1-kube-api-access-w5hxr\") pod \"cert-manager-cainjector-855d9ccff4-9qmpd\" (UID: \"dba74507-8eb4-4eb4-bc0b-deffbea958d1\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:49.943521 master-0 kubenswrapper[4409]: I1203 14:37:49.943458 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" Dec 03 14:37:53.554987 master-0 kubenswrapper[4409]: I1203 14:37:53.553155 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" event={"ID":"46923c3d-e14a-49da-9ea1-34a6c7ae5ff1","Type":"ContainerStarted","Data":"6d51c0e6fc5fe667d9761db613805e96c13e80cf8b6349d9e84b8f12f6a44932"} Dec 03 14:37:53.555534 master-0 kubenswrapper[4409]: I1203 14:37:53.555057 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" Dec 03 14:37:53.558472 master-0 kubenswrapper[4409]: I1203 14:37:53.557692 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" event={"ID":"884460b9-240c-49c8-bbab-8eb8cd66c30b","Type":"ContainerStarted","Data":"d96a7b1ea384d5d151d0abde0080ef894931856e28f274dcbded8a43fa6604ed"} Dec 03 14:37:53.558472 master-0 kubenswrapper[4409]: I1203 14:37:53.558251 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" Dec 03 14:37:53.591533 master-0 kubenswrapper[4409]: I1203 14:37:53.591389 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s" podStartSLOduration=5.279428859 podStartE2EDuration="15.591358858s" podCreationTimestamp="2025-12-03 14:37:38 +0000 UTC" firstStartedPulling="2025-12-03 14:37:42.91582987 +0000 UTC m=+695.242892366" lastFinishedPulling="2025-12-03 14:37:53.227759859 +0000 UTC m=+705.554822365" observedRunningTime="2025-12-03 14:37:53.590560365 +0000 UTC m=+705.917622891" watchObservedRunningTime="2025-12-03 14:37:53.591358858 +0000 UTC m=+705.918421364" Dec 03 14:37:53.638067 master-0 kubenswrapper[4409]: I1203 14:37:53.636137 4409 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt" podStartSLOduration=4.131642227 podStartE2EDuration="14.636107303s" podCreationTimestamp="2025-12-03 14:37:39 +0000 UTC" firstStartedPulling="2025-12-03 14:37:42.730967603 +0000 UTC m=+695.058030109" lastFinishedPulling="2025-12-03 14:37:53.235432679 +0000 UTC m=+705.562495185" observedRunningTime="2025-12-03 14:37:53.634333272 +0000 UTC m=+705.961395768" watchObservedRunningTime="2025-12-03 14:37:53.636107303 +0000 UTC m=+705.963169809" Dec 03 14:37:54.214427 master-0 kubenswrapper[4409]: W1203 14:37:54.207263 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7297bea_bca8_4adb_a0b5_329abb9361ef.slice/crio-a95c14586307e9f1222634424d1d32a5d2b2cae4ddd123db9c886f6e11360296 WatchSource:0}: Error finding container a95c14586307e9f1222634424d1d32a5d2b2cae4ddd123db9c886f6e11360296: Status 404 returned error can't find the container with id a95c14586307e9f1222634424d1d32a5d2b2cae4ddd123db9c886f6e11360296 Dec 03 14:37:54.214427 master-0 kubenswrapper[4409]: I1203 14:37:54.214093 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd"] Dec 03 14:37:54.238516 master-0 kubenswrapper[4409]: I1203 14:37:54.238454 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kfv5q"] Dec 03 14:37:54.515194 master-0 kubenswrapper[4409]: I1203 14:37:54.514992 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5"] Dec 03 14:37:54.516060 master-0 kubenswrapper[4409]: I1203 14:37:54.515975 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" Dec 03 14:37:54.520057 master-0 kubenswrapper[4409]: I1203 14:37:54.519986 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 03 14:37:54.520292 master-0 kubenswrapper[4409]: I1203 14:37:54.520187 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 03 14:37:54.572472 master-0 kubenswrapper[4409]: I1203 14:37:54.572346 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzfgw\" (UniqueName: \"kubernetes.io/projected/85715c88-8e08-4ff4-9d38-44a66160639f-kube-api-access-mzfgw\") pod \"obo-prometheus-operator-668cf9dfbb-4dft5\" (UID: \"85715c88-8e08-4ff4-9d38-44a66160639f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" Dec 03 14:37:54.585148 master-0 kubenswrapper[4409]: I1203 14:37:54.584287 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5"] Dec 03 14:37:54.592127 master-0 kubenswrapper[4409]: I1203 14:37:54.591664 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" event={"ID":"d7297bea-bca8-4adb-a0b5-329abb9361ef","Type":"ContainerStarted","Data":"a95c14586307e9f1222634424d1d32a5d2b2cae4ddd123db9c886f6e11360296"} Dec 03 14:37:54.594438 master-0 kubenswrapper[4409]: I1203 14:37:54.594218 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" event={"ID":"563d4899-8cf3-4ccc-965b-4b63573da5f7","Type":"ContainerStarted","Data":"3ddcf5d6d7c7c66ca59e6591bcd49e0d8bc835f52b411302ad778011366a2316"} Dec 03 14:37:54.595967 master-0 kubenswrapper[4409]: I1203 14:37:54.595898 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" event={"ID":"dba74507-8eb4-4eb4-bc0b-deffbea958d1","Type":"ContainerStarted","Data":"a429da2385d74ab3beced15889a3f78cfc7eb5a313034c6622b74dbfbb0bef3c"} Dec 03 14:37:54.640653 master-0 kubenswrapper[4409]: I1203 14:37:54.640541 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-ddls5" podStartSLOduration=7.347453393 podStartE2EDuration="17.64051692s" podCreationTimestamp="2025-12-03 14:37:37 +0000 UTC" firstStartedPulling="2025-12-03 14:37:42.943240807 +0000 UTC m=+695.270303313" lastFinishedPulling="2025-12-03 14:37:53.236304334 +0000 UTC m=+705.563366840" observedRunningTime="2025-12-03 14:37:54.640208731 +0000 UTC m=+706.967271237" watchObservedRunningTime="2025-12-03 14:37:54.64051692 +0000 UTC m=+706.967579436" Dec 03 14:37:54.685034 master-0 kubenswrapper[4409]: I1203 14:37:54.682037 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzfgw\" (UniqueName: \"kubernetes.io/projected/85715c88-8e08-4ff4-9d38-44a66160639f-kube-api-access-mzfgw\") pod \"obo-prometheus-operator-668cf9dfbb-4dft5\" (UID: \"85715c88-8e08-4ff4-9d38-44a66160639f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" Dec 03 14:37:54.709159 master-0 kubenswrapper[4409]: I1203 14:37:54.708321 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzfgw\" (UniqueName: \"kubernetes.io/projected/85715c88-8e08-4ff4-9d38-44a66160639f-kube-api-access-mzfgw\") pod \"obo-prometheus-operator-668cf9dfbb-4dft5\" (UID: \"85715c88-8e08-4ff4-9d38-44a66160639f\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" Dec 03 14:37:54.768547 master-0 kubenswrapper[4409]: I1203 14:37:54.765085 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"] Dec 03 14:37:54.768547 master-0 
kubenswrapper[4409]: I1203 14:37:54.766943 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:54.772577 master-0 kubenswrapper[4409]: I1203 14:37:54.770946 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Dec 03 14:37:54.779044 master-0 kubenswrapper[4409]: I1203 14:37:54.776685 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"]
Dec 03 14:37:54.779044 master-0 kubenswrapper[4409]: I1203 14:37:54.778631 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:54.804257 master-0 kubenswrapper[4409]: I1203 14:37:54.804122 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"]
Dec 03 14:37:54.805536 master-0 kubenswrapper[4409]: I1203 14:37:54.805463 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"]
Dec 03 14:37:54.844549 master-0 kubenswrapper[4409]: I1203 14:37:54.843448 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5"
Dec 03 14:37:54.889479 master-0 kubenswrapper[4409]: I1203 14:37:54.889361 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:54.890080 master-0 kubenswrapper[4409]: I1203 14:37:54.889784 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:54.890080 master-0 kubenswrapper[4409]: I1203 14:37:54.889969 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:54.890949 master-0 kubenswrapper[4409]: I1203 14:37:54.890231 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:54.992768 master-0 kubenswrapper[4409]: I1203 14:37:54.991945 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:54.992768 master-0 kubenswrapper[4409]: I1203 14:37:54.992592 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:54.992768 master-0 kubenswrapper[4409]: I1203 14:37:54.992745 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:54.992768 master-0 kubenswrapper[4409]: I1203 14:37:54.992778 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:55.002402 master-0 kubenswrapper[4409]: I1203 14:37:55.002283 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:55.006402 master-0 kubenswrapper[4409]: I1203 14:37:55.005990 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:55.006402 master-0 kubenswrapper[4409]: I1203 14:37:55.006255 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf\" (UID: \"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:55.010169 master-0 kubenswrapper[4409]: I1203 14:37:55.010104 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-25nhp"]
Dec 03 14:37:55.049020 master-0 kubenswrapper[4409]: I1203 14:37:55.003754 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ada54cf9-aeb6-4200-a41d-27d6ef6e6a19-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg\" (UID: \"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:55.083514 master-0 kubenswrapper[4409]: I1203 14:37:55.082466 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-25nhp"]
Dec 03 14:37:55.084849 master-0 kubenswrapper[4409]: I1203 14:37:55.084801 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.102555 master-0 kubenswrapper[4409]: I1203 14:37:55.101653 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Dec 03 14:37:55.154704 master-0 kubenswrapper[4409]: I1203 14:37:55.154148 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"
Dec 03 14:37:55.158077 master-0 kubenswrapper[4409]: I1203 14:37:55.158026 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-vxj9b"]
Dec 03 14:37:55.159262 master-0 kubenswrapper[4409]: I1203 14:37:55.159230 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.174226 master-0 kubenswrapper[4409]: I1203 14:37:55.173762 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-vxj9b"]
Dec 03 14:37:55.196550 master-0 kubenswrapper[4409]: I1203 14:37:55.196447 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8f07b98-899d-4186-8a76-ac583e3b9ab9-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.196807 master-0 kubenswrapper[4409]: I1203 14:37:55.196564 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcq9v\" (UniqueName: \"kubernetes.io/projected/93525cd4-2d23-46d1-99c7-7e18528b9c8b-kube-api-access-vcq9v\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.196807 master-0 kubenswrapper[4409]: I1203 14:37:55.196592 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/93525cd4-2d23-46d1-99c7-7e18528b9c8b-openshift-service-ca\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.196807 master-0 kubenswrapper[4409]: I1203 14:37:55.196639 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt56h\" (UniqueName: \"kubernetes.io/projected/f8f07b98-899d-4186-8a76-ac583e3b9ab9-kube-api-access-wt56h\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.226445 master-0 kubenswrapper[4409]: I1203 14:37:55.225804 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"
Dec 03 14:37:55.299077 master-0 kubenswrapper[4409]: I1203 14:37:55.298833 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8f07b98-899d-4186-8a76-ac583e3b9ab9-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.299077 master-0 kubenswrapper[4409]: I1203 14:37:55.298943 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcq9v\" (UniqueName: \"kubernetes.io/projected/93525cd4-2d23-46d1-99c7-7e18528b9c8b-kube-api-access-vcq9v\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.299077 master-0 kubenswrapper[4409]: I1203 14:37:55.298976 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/93525cd4-2d23-46d1-99c7-7e18528b9c8b-openshift-service-ca\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.299077 master-0 kubenswrapper[4409]: I1203 14:37:55.299044 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt56h\" (UniqueName: \"kubernetes.io/projected/f8f07b98-899d-4186-8a76-ac583e3b9ab9-kube-api-access-wt56h\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.302654 master-0 kubenswrapper[4409]: I1203 14:37:55.302608 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/93525cd4-2d23-46d1-99c7-7e18528b9c8b-openshift-service-ca\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.309835 master-0 kubenswrapper[4409]: I1203 14:37:55.309768 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f8f07b98-899d-4186-8a76-ac583e3b9ab9-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.335306 master-0 kubenswrapper[4409]: I1203 14:37:55.334565 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcq9v\" (UniqueName: \"kubernetes.io/projected/93525cd4-2d23-46d1-99c7-7e18528b9c8b-kube-api-access-vcq9v\") pod \"perses-operator-5446b9c989-vxj9b\" (UID: \"93525cd4-2d23-46d1-99c7-7e18528b9c8b\") " pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:55.346932 master-0 kubenswrapper[4409]: I1203 14:37:55.346874 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt56h\" (UniqueName: \"kubernetes.io/projected/f8f07b98-899d-4186-8a76-ac583e3b9ab9-kube-api-access-wt56h\") pod \"observability-operator-d8bb48f5d-25nhp\" (UID: \"f8f07b98-899d-4186-8a76-ac583e3b9ab9\") " pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.465672 master-0 kubenswrapper[4409]: I1203 14:37:55.465591 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:37:55.504170 master-0 kubenswrapper[4409]: I1203 14:37:55.504087 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:37:59.296636 master-0 kubenswrapper[4409]: I1203 14:37:59.296562 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5"]
Dec 03 14:37:59.303375 master-0 kubenswrapper[4409]: W1203 14:37:59.303308 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85715c88_8e08_4ff4_9d38_44a66160639f.slice/crio-0a3bd595e0bb45f31a15624732e5c32411b31724dbf1ac3fb265c2826777e931 WatchSource:0}: Error finding container 0a3bd595e0bb45f31a15624732e5c32411b31724dbf1ac3fb265c2826777e931: Status 404 returned error can't find the container with id 0a3bd595e0bb45f31a15624732e5c32411b31724dbf1ac3fb265c2826777e931
Dec 03 14:37:59.651861 master-0 kubenswrapper[4409]: I1203 14:37:59.651773 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" event={"ID":"85715c88-8e08-4ff4-9d38-44a66160639f","Type":"ContainerStarted","Data":"0a3bd595e0bb45f31a15624732e5c32411b31724dbf1ac3fb265c2826777e931"}
Dec 03 14:38:00.043484 master-0 kubenswrapper[4409]: I1203 14:38:00.043360 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-k7j45"]
Dec 03 14:38:00.050620 master-0 kubenswrapper[4409]: I1203 14:38:00.050551 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.052069 master-0 kubenswrapper[4409]: I1203 14:38:00.051990 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-bound-sa-token\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.052204 master-0 kubenswrapper[4409]: I1203 14:38:00.052118 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcgcj\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-kube-api-access-rcgcj\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.129728 master-0 kubenswrapper[4409]: I1203 14:38:00.127442 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-k7j45"]
Dec 03 14:38:00.129932 master-0 kubenswrapper[4409]: W1203 14:38:00.129809 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada54cf9_aeb6_4200_a41d_27d6ef6e6a19.slice/crio-e5d943984f85ccffe131213ebd41197b0fc1186d9f9d043edde59508f45d6bce WatchSource:0}: Error finding container e5d943984f85ccffe131213ebd41197b0fc1186d9f9d043edde59508f45d6bce: Status 404 returned error can't find the container with id e5d943984f85ccffe131213ebd41197b0fc1186d9f9d043edde59508f45d6bce
Dec 03 14:38:00.161805 master-0 kubenswrapper[4409]: W1203 14:38:00.161705 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8f07b98_899d_4186_8a76_ac583e3b9ab9.slice/crio-cc81efd4ec29530c6f0a6a681c029063ab989990a28919bff7312d445a079034 WatchSource:0}: Error finding container cc81efd4ec29530c6f0a6a681c029063ab989990a28919bff7312d445a079034: Status 404 returned error can't find the container with id cc81efd4ec29530c6f0a6a681c029063ab989990a28919bff7312d445a079034
Dec 03 14:38:00.165070 master-0 kubenswrapper[4409]: I1203 14:38:00.164960 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-bound-sa-token\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.165148 master-0 kubenswrapper[4409]: I1203 14:38:00.165068 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcgcj\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-kube-api-access-rcgcj\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.226039 master-0 kubenswrapper[4409]: I1203 14:38:00.217187 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf"]
Dec 03 14:38:00.245049 master-0 kubenswrapper[4409]: I1203 14:38:00.233238 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg"]
Dec 03 14:38:00.267649 master-0 kubenswrapper[4409]: I1203 14:38:00.267085 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-vxj9b"]
Dec 03 14:38:00.297542 master-0 kubenswrapper[4409]: I1203 14:38:00.297289 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-25nhp"]
Dec 03 14:38:00.359120 master-0 kubenswrapper[4409]: I1203 14:38:00.359064 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-bound-sa-token\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.369144 master-0 kubenswrapper[4409]: I1203 14:38:00.366904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcgcj\" (UniqueName: \"kubernetes.io/projected/cbd1184d-4f01-4814-ae46-4a786e605a41-kube-api-access-rcgcj\") pod \"cert-manager-86cb77c54b-k7j45\" (UID: \"cbd1184d-4f01-4814-ae46-4a786e605a41\") " pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.382395 master-0 kubenswrapper[4409]: I1203 14:38:00.382283 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-k7j45"
Dec 03 14:38:00.683297 master-0 kubenswrapper[4409]: I1203 14:38:00.683196 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg" event={"ID":"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19","Type":"ContainerStarted","Data":"e5d943984f85ccffe131213ebd41197b0fc1186d9f9d043edde59508f45d6bce"}
Dec 03 14:38:00.686748 master-0 kubenswrapper[4409]: I1203 14:38:00.686658 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp" event={"ID":"f8f07b98-899d-4186-8a76-ac583e3b9ab9","Type":"ContainerStarted","Data":"cc81efd4ec29530c6f0a6a681c029063ab989990a28919bff7312d445a079034"}
Dec 03 14:38:00.688922 master-0 kubenswrapper[4409]: I1203 14:38:00.688880 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-vxj9b" event={"ID":"93525cd4-2d23-46d1-99c7-7e18528b9c8b","Type":"ContainerStarted","Data":"14e5db6c51db469c6f572a3cd5f0067ad7d8370a745ca206dcba871f5034a6f8"}
Dec 03 14:38:00.693201 master-0 kubenswrapper[4409]: I1203 14:38:00.693157 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf" event={"ID":"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b","Type":"ContainerStarted","Data":"21c4b1707dad4c5b80ecb68221b3db5ba4ffcc0103403908941c7b443aa17383"}
Dec 03 14:38:06.189735 master-0 kubenswrapper[4409]: I1203 14:38:06.189614 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-k7j45"]
Dec 03 14:38:07.440194 master-0 kubenswrapper[4409]: W1203 14:38:07.440131 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbd1184d_4f01_4814_ae46_4a786e605a41.slice/crio-3b52845e3a620a7b5e8643dd051d6a376331080112f84162eaa41791219d2758 WatchSource:0}: Error finding container 3b52845e3a620a7b5e8643dd051d6a376331080112f84162eaa41791219d2758: Status 404 returned error can't find the container with id 3b52845e3a620a7b5e8643dd051d6a376331080112f84162eaa41791219d2758
Dec 03 14:38:07.801508 master-0 kubenswrapper[4409]: I1203 14:38:07.801403 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-k7j45" event={"ID":"cbd1184d-4f01-4814-ae46-4a786e605a41","Type":"ContainerStarted","Data":"3b52845e3a620a7b5e8643dd051d6a376331080112f84162eaa41791219d2758"}
Dec 03 14:38:08.812866 master-0 kubenswrapper[4409]: I1203 14:38:08.812812 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-k7j45" event={"ID":"cbd1184d-4f01-4814-ae46-4a786e605a41","Type":"ContainerStarted","Data":"62d5b8bb83ec288c37045028daaed879547f9c3df74a9c0078f2f0aa1534a315"}
Dec 03 14:38:08.816048 master-0 kubenswrapper[4409]: I1203 14:38:08.815885 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg" event={"ID":"ada54cf9-aeb6-4200-a41d-27d6ef6e6a19","Type":"ContainerStarted","Data":"446b862d1de004a22bf4983d7ceeed30e934789296f83c639357a0b72f308bc5"}
Dec 03 14:38:08.817700 master-0 kubenswrapper[4409]: I1203 14:38:08.817655 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" event={"ID":"d7297bea-bca8-4adb-a0b5-329abb9361ef","Type":"ContainerStarted","Data":"76a060906d16d0c0d905dd62fb2f3e7d990b2c001f4c4e36d7034fe3e10f0904"}
Dec 03 14:38:08.817777 master-0 kubenswrapper[4409]: I1203 14:38:08.817740 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q"
Dec 03 14:38:08.819725 master-0 kubenswrapper[4409]: I1203 14:38:08.819694 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-vxj9b" event={"ID":"93525cd4-2d23-46d1-99c7-7e18528b9c8b","Type":"ContainerStarted","Data":"931556705cdb38798661fbb7cecc5c21a29c4e45f83c7a7328d2ed58e3a8037e"}
Dec 03 14:38:08.819800 master-0 kubenswrapper[4409]: I1203 14:38:08.819778 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:38:08.821506 master-0 kubenswrapper[4409]: I1203 14:38:08.821467 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" event={"ID":"dba74507-8eb4-4eb4-bc0b-deffbea958d1","Type":"ContainerStarted","Data":"1220ead262e9cc44cc2d59a3d7b65b7b4baa741d78c0a55e4b61ee799f6801c7"}
Dec 03 14:38:08.823967 master-0 kubenswrapper[4409]: I1203 14:38:08.823916 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf" event={"ID":"c1bdc1cf-4a79-4ec4-bf61-f83cf92c997b","Type":"ContainerStarted","Data":"9d4e6ad6b008f119510ee7bf3b4598b6b933debf896328f649bfa9f12d4044e2"}
Dec 03 14:38:08.825687 master-0 kubenswrapper[4409]: I1203 14:38:08.825625 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" event={"ID":"85715c88-8e08-4ff4-9d38-44a66160639f","Type":"ContainerStarted","Data":"b372f0575d5f4c4ecb598add9116d8263c201d23e2224a55e3ac6f8be5a36d5d"}
Dec 03 14:38:09.436406 master-0 kubenswrapper[4409]: I1203 14:38:09.436236 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-k7j45" podStartSLOduration=14.436192678 podStartE2EDuration="14.436192678s" podCreationTimestamp="2025-12-03 14:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:38:09.430409732 +0000 UTC m=+721.757472248" watchObservedRunningTime="2025-12-03 14:38:09.436192678 +0000 UTC m=+721.763255184"
Dec 03 14:38:09.496760 master-0 kubenswrapper[4409]: I1203 14:38:09.495828 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-vxj9b" podStartSLOduration=7.019060171 podStartE2EDuration="14.49579747s" podCreationTimestamp="2025-12-03 14:37:55 +0000 UTC" firstStartedPulling="2025-12-03 14:38:00.15423301 +0000 UTC m=+712.481295526" lastFinishedPulling="2025-12-03 14:38:07.630970319 +0000 UTC m=+719.958032825" observedRunningTime="2025-12-03 14:38:09.477214896 +0000 UTC m=+721.804277412" watchObservedRunningTime="2025-12-03 14:38:09.49579747 +0000 UTC m=+721.822859976"
Dec 03 14:38:09.541600 master-0 kubenswrapper[4409]: I1203 14:38:09.541076 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-xfkxf" podStartSLOduration=8.033595228 podStartE2EDuration="15.540548104s" podCreationTimestamp="2025-12-03 14:37:54 +0000 UTC" firstStartedPulling="2025-12-03 14:38:00.150426401 +0000 UTC m=+712.477488907" lastFinishedPulling="2025-12-03 14:38:07.657379277 +0000 UTC m=+719.984441783" observedRunningTime="2025-12-03 14:38:09.519880351 +0000 UTC m=+721.846942867" watchObservedRunningTime="2025-12-03 14:38:09.540548104 +0000 UTC m=+721.867610610"
Dec 03 14:38:09.556081 master-0 kubenswrapper[4409]: I1203 14:38:09.555882 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q" podStartSLOduration=11.14798533 podStartE2EDuration="24.555829393s" podCreationTimestamp="2025-12-03 14:37:45 +0000 UTC" firstStartedPulling="2025-12-03 14:37:54.224708492 +0000 UTC m=+706.551770998" lastFinishedPulling="2025-12-03 14:38:07.632552555 +0000 UTC m=+719.959615061" observedRunningTime="2025-12-03 14:38:09.549049138 +0000 UTC m=+721.876111664" watchObservedRunningTime="2025-12-03 14:38:09.555829393 +0000 UTC m=+721.882891889"
Dec 03 14:38:09.635451 master-0 kubenswrapper[4409]: I1203 14:38:09.635307 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-4dft5" podStartSLOduration=7.317483687 podStartE2EDuration="15.630864067s" podCreationTimestamp="2025-12-03 14:37:54 +0000 UTC" firstStartedPulling="2025-12-03 14:37:59.318234018 +0000 UTC m=+711.645296524" lastFinishedPulling="2025-12-03 14:38:07.631614398 +0000 UTC m=+719.958676904" observedRunningTime="2025-12-03 14:38:09.625895605 +0000 UTC m=+721.952958111" watchObservedRunningTime="2025-12-03 14:38:09.630864067 +0000 UTC m=+721.957926573"
Dec 03 14:38:09.895130 master-0 kubenswrapper[4409]: I1203 14:38:09.894779 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7d9bbdcff5-k5wlt"
Dec 03 14:38:10.056415 master-0 kubenswrapper[4409]: I1203 14:38:10.056145 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b955f4bd8-p42gg" podStartSLOduration=8.560909426 podStartE2EDuration="16.056110915s" podCreationTimestamp="2025-12-03 14:37:54 +0000 UTC" firstStartedPulling="2025-12-03 14:38:00.136178912 +0000 UTC m=+712.463241418" lastFinishedPulling="2025-12-03 14:38:07.631380401 +0000 UTC m=+719.958442907" observedRunningTime="2025-12-03 14:38:10.047991112 +0000 UTC m=+722.375053618" watchObservedRunningTime="2025-12-03 14:38:10.056110915 +0000 UTC m=+722.383173421"
Dec 03 14:38:11.408749 master-0 kubenswrapper[4409]: I1203 14:38:11.408621 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-9qmpd" podStartSLOduration=9.03303654 podStartE2EDuration="22.408561385s" podCreationTimestamp="2025-12-03 14:37:49 +0000 UTC" firstStartedPulling="2025-12-03 14:37:54.225102233 +0000 UTC m=+706.552164739" lastFinishedPulling="2025-12-03 14:38:07.600627078 +0000 UTC m=+719.927689584" observedRunningTime="2025-12-03 14:38:11.394402859 +0000 UTC m=+723.721465395" watchObservedRunningTime="2025-12-03 14:38:11.408561385 +0000 UTC m=+723.735623891"
Dec 03 14:38:12.863701 master-0 kubenswrapper[4409]: I1203 14:38:12.863609 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp" event={"ID":"f8f07b98-899d-4186-8a76-ac583e3b9ab9","Type":"ContainerStarted","Data":"4def334df1177bc45152e0054572d7175df876f11c631ae1cfa5bd0c0a20b4db"}
Dec 03 14:38:12.864571 master-0 kubenswrapper[4409]: I1203 14:38:12.863954 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:38:12.902251 master-0 kubenswrapper[4409]: I1203 14:38:12.902157 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp" podStartSLOduration=7.050778879 podStartE2EDuration="18.902136306s" podCreationTimestamp="2025-12-03 14:37:54 +0000 UTC" firstStartedPulling="2025-12-03 14:38:00.164243397 +0000 UTC m=+712.491305903" lastFinishedPulling="2025-12-03 14:38:12.015600823 +0000 UTC m=+724.342663330" observedRunningTime="2025-12-03 14:38:12.897309848 +0000 UTC m=+725.224372364" watchObservedRunningTime="2025-12-03 14:38:12.902136306 +0000 UTC m=+725.229198812"
Dec 03 14:38:12.944313 master-0 kubenswrapper[4409]: I1203 14:38:12.944214 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-25nhp"
Dec 03 14:38:15.508331 master-0 kubenswrapper[4409]: I1203 14:38:15.508269 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-vxj9b"
Dec 03 14:38:16.750849 master-0 kubenswrapper[4409]: I1203 14:38:16.750748 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-kfv5q"
Dec 03 14:38:29.542383 master-0 kubenswrapper[4409]: I1203 14:38:29.542257 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-d988cbf4b-f589s"
Dec 03 14:38:36.696653 master-0 kubenswrapper[4409]: I1203 14:38:36.696539 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"]
Dec 03 14:38:36.698753 master-0 kubenswrapper[4409]: I1203 14:38:36.698716 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"
Dec 03 14:38:36.711195 master-0 kubenswrapper[4409]: I1203 14:38:36.706578 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Dec 03 14:38:36.711195 master-0 kubenswrapper[4409]: I1203 14:38:36.709336 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-g7b5b"]
Dec 03 14:38:36.715135 master-0 kubenswrapper[4409]: I1203 14:38:36.715086 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.722485 master-0 kubenswrapper[4409]: I1203 14:38:36.722423 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"]
Dec 03 14:38:36.722911 master-0 kubenswrapper[4409]: I1203 14:38:36.722857 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Dec 03 14:38:36.723111 master-0 kubenswrapper[4409]: I1203 14:38:36.723021 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Dec 03 14:38:36.746447 master-0 kubenswrapper[4409]: I1203 14:38:36.746386 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58667331-42ed-4c03-8e88-38d27c2cc026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"
Dec 03 14:38:36.751049 master-0 kubenswrapper[4409]: I1203 14:38:36.750995 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-sockets\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.751421 master-0 kubenswrapper[4409]: I1203 14:38:36.751397 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcff\" (UniqueName: \"kubernetes.io/projected/22e8cc91-45ad-4646-8682-fdf4be50815c-kube-api-access-mqcff\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.751575 master-0 kubenswrapper[4409]: I1203 14:38:36.751560 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.751788 master-0 kubenswrapper[4409]: I1203 14:38:36.751774 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-conf\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.751976 master-0 kubenswrapper[4409]: I1203 14:38:36.751903 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjf6c\" (UniqueName: \"kubernetes.io/projected/58667331-42ed-4c03-8e88-38d27c2cc026-kube-api-access-gjf6c\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"
Dec 03 14:38:36.752173 master-0 kubenswrapper[4409]: I1203 14:38:36.752151 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-startup\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.752311 master-0 kubenswrapper[4409]: I1203 14:38:36.752298 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics-certs\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.752903 master-0 kubenswrapper[4409]: I1203 14:38:36.752427 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-reloader\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b"
Dec 03 14:38:36.816804 master-0 kubenswrapper[4409]: I1203 14:38:36.816309 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4gqc8"]
Dec 03 14:38:36.819694 master-0 kubenswrapper[4409]: I1203 14:38:36.819648 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4gqc8"
Dec 03 14:38:36.824786 master-0 kubenswrapper[4409]: I1203 14:38:36.824745 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Dec 03 14:38:36.825151 master-0 kubenswrapper[4409]: I1203 14:38:36.825083 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Dec 03 14:38:36.825297 master-0 kubenswrapper[4409]: I1203 14:38:36.825276 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Dec 03 14:38:36.837125 master-0 kubenswrapper[4409]: I1203 14:38:36.835845 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-pf6cm"]
Dec 03 14:38:36.838774 master-0 kubenswrapper[4409]: I1203 14:38:36.838258 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:36.841143 master-0 kubenswrapper[4409]: I1203 14:38:36.841087 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854231 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58667331-42ed-4c03-8e88-38d27c2cc026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854342 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-sockets\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854388 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqcff\" (UniqueName: \"kubernetes.io/projected/22e8cc91-45ad-4646-8682-fdf4be50815c-kube-api-access-mqcff\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854423 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854444 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-conf\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854467 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjf6c\" (UniqueName: \"kubernetes.io/projected/58667331-42ed-4c03-8e88-38d27c2cc026-kube-api-access-gjf6c\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854496 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-startup\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854521 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics-certs\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854546 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-reloader\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.854966 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-reloader\") pod \"frr-k8s-g7b5b\" 
(UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.855520 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.856709 master-0 kubenswrapper[4409]: I1203 14:38:36.855564 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-sockets\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.866611 master-0 kubenswrapper[4409]: I1203 14:38:36.866545 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-conf\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.867974 master-0 kubenswrapper[4409]: I1203 14:38:36.867949 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/22e8cc91-45ad-4646-8682-fdf4be50815c-frr-startup\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.868862 master-0 kubenswrapper[4409]: I1203 14:38:36.868566 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pf6cm"] Dec 03 14:38:36.871536 master-0 kubenswrapper[4409]: I1203 14:38:36.871476 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22e8cc91-45ad-4646-8682-fdf4be50815c-metrics-certs\") pod \"frr-k8s-g7b5b\" (UID: 
\"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.873424 master-0 kubenswrapper[4409]: I1203 14:38:36.873362 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58667331-42ed-4c03-8e88-38d27c2cc026-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:36.889135 master-0 kubenswrapper[4409]: I1203 14:38:36.889073 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjf6c\" (UniqueName: \"kubernetes.io/projected/58667331-42ed-4c03-8e88-38d27c2cc026-kube-api-access-gjf6c\") pod \"frr-k8s-webhook-server-7fcb986d4-4gsth\" (UID: \"58667331-42ed-4c03-8e88-38d27c2cc026\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:36.893190 master-0 kubenswrapper[4409]: I1203 14:38:36.893128 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcff\" (UniqueName: \"kubernetes.io/projected/22e8cc91-45ad-4646-8682-fdf4be50815c-kube-api-access-mqcff\") pod \"frr-k8s-g7b5b\" (UID: \"22e8cc91-45ad-4646-8682-fdf4be50815c\") " pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955571 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955661 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-metrics-certs\") pod 
\"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955689 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbbg\" (UniqueName: \"kubernetes.io/projected/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-kube-api-access-pjbbg\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955717 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-cert\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955808 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955851 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6149804a-9b99-49a8-b88e-001b8a572ce9-metallb-excludel2\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:36.955983 master-0 kubenswrapper[4409]: I1203 14:38:36.955923 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7zj\" (UniqueName: 
\"kubernetes.io/projected/6149804a-9b99-49a8-b88e-001b8a572ce9-kube-api-access-df7zj\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.058407 master-0 kubenswrapper[4409]: I1203 14:38:37.058314 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-metrics-certs\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.058407 master-0 kubenswrapper[4409]: I1203 14:38:37.058391 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbbg\" (UniqueName: \"kubernetes.io/projected/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-kube-api-access-pjbbg\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.058407 master-0 kubenswrapper[4409]: I1203 14:38:37.058431 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-cert\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.058905 master-0 kubenswrapper[4409]: I1203 14:38:37.058513 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.058905 master-0 kubenswrapper[4409]: I1203 14:38:37.058560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/6149804a-9b99-49a8-b88e-001b8a572ce9-metallb-excludel2\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.058905 master-0 kubenswrapper[4409]: I1203 14:38:37.058624 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7zj\" (UniqueName: \"kubernetes.io/projected/6149804a-9b99-49a8-b88e-001b8a572ce9-kube-api-access-df7zj\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.058905 master-0 kubenswrapper[4409]: I1203 14:38:37.058676 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.059107 master-0 kubenswrapper[4409]: E1203 14:38:37.058910 4409 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 03 14:38:37.059107 master-0 kubenswrapper[4409]: E1203 14:38:37.059042 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs podName:930fd3d1-d7ce-49f1-a3ea-1dd7493f0955 nodeName:}" failed. No retries permitted until 2025-12-03 14:38:37.558972131 +0000 UTC m=+749.886034647 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs") pod "controller-f8648f98b-pf6cm" (UID: "930fd3d1-d7ce-49f1-a3ea-1dd7493f0955") : secret "controller-certs-secret" not found Dec 03 14:38:37.060127 master-0 kubenswrapper[4409]: E1203 14:38:37.059916 4409 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 14:38:37.060127 master-0 kubenswrapper[4409]: E1203 14:38:37.059976 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist podName:6149804a-9b99-49a8-b88e-001b8a572ce9 nodeName:}" failed. No retries permitted until 2025-12-03 14:38:37.559963349 +0000 UTC m=+749.887025855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist") pod "speaker-4gqc8" (UID: "6149804a-9b99-49a8-b88e-001b8a572ce9") : secret "metallb-memberlist" not found Dec 03 14:38:37.061157 master-0 kubenswrapper[4409]: I1203 14:38:37.061087 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6149804a-9b99-49a8-b88e-001b8a572ce9-metallb-excludel2\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.065564 master-0 kubenswrapper[4409]: I1203 14:38:37.065323 4409 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 03 14:38:37.065915 master-0 kubenswrapper[4409]: I1203 14:38:37.065619 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-metrics-certs\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " 
pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.075091 master-0 kubenswrapper[4409]: I1203 14:38:37.074969 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:37.075999 master-0 kubenswrapper[4409]: I1203 14:38:37.075955 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-cert\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.087502 master-0 kubenswrapper[4409]: I1203 14:38:37.087422 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7zj\" (UniqueName: \"kubernetes.io/projected/6149804a-9b99-49a8-b88e-001b8a572ce9-kube-api-access-df7zj\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.087842 master-0 kubenswrapper[4409]: I1203 14:38:37.087771 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbbg\" (UniqueName: \"kubernetes.io/projected/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-kube-api-access-pjbbg\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.100530 master-0 kubenswrapper[4409]: I1203 14:38:37.097976 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:37.509837 master-0 kubenswrapper[4409]: I1203 14:38:37.509614 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"2a17f93961759b067afe61100705e0af1db6971cc24e899c70eb9a9b6589f819"} Dec 03 14:38:37.522263 master-0 kubenswrapper[4409]: W1203 14:38:37.522151 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58667331_42ed_4c03_8e88_38d27c2cc026.slice/crio-06f78f5a7760442d0cc51262fd6462f22a616afff1ab689cbdb2639deed3dfd5 WatchSource:0}: Error finding container 06f78f5a7760442d0cc51262fd6462f22a616afff1ab689cbdb2639deed3dfd5: Status 404 returned error can't find the container with id 06f78f5a7760442d0cc51262fd6462f22a616afff1ab689cbdb2639deed3dfd5 Dec 03 14:38:37.523123 master-0 kubenswrapper[4409]: I1203 14:38:37.523074 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth"] Dec 03 14:38:37.571555 master-0 kubenswrapper[4409]: I1203 14:38:37.571465 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:37.571844 master-0 kubenswrapper[4409]: I1203 14:38:37.571611 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.571921 master-0 kubenswrapper[4409]: E1203 14:38:37.571797 4409 secret.go:189] Couldn't get secret 
metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 14:38:37.571973 master-0 kubenswrapper[4409]: E1203 14:38:37.571946 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist podName:6149804a-9b99-49a8-b88e-001b8a572ce9 nodeName:}" failed. No retries permitted until 2025-12-03 14:38:38.571909694 +0000 UTC m=+750.898972370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist") pod "speaker-4gqc8" (UID: "6149804a-9b99-49a8-b88e-001b8a572ce9") : secret "metallb-memberlist" not found Dec 03 14:38:37.576058 master-0 kubenswrapper[4409]: I1203 14:38:37.575328 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/930fd3d1-d7ce-49f1-a3ea-1dd7493f0955-metrics-certs\") pod \"controller-f8648f98b-pf6cm\" (UID: \"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955\") " pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:37.846529 master-0 kubenswrapper[4409]: I1203 14:38:37.846433 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:38.409290 master-0 kubenswrapper[4409]: I1203 14:38:38.408939 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-pf6cm"] Dec 03 14:38:38.545583 master-0 kubenswrapper[4409]: I1203 14:38:38.545463 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" event={"ID":"58667331-42ed-4c03-8e88-38d27c2cc026","Type":"ContainerStarted","Data":"06f78f5a7760442d0cc51262fd6462f22a616afff1ab689cbdb2639deed3dfd5"} Dec 03 14:38:38.548160 master-0 kubenswrapper[4409]: I1203 14:38:38.548117 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pf6cm" event={"ID":"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955","Type":"ContainerStarted","Data":"571810e2b04dd5b3f57dbae9e637180dbd953db8c537c3e4c79d0d60f280357a"} Dec 03 14:38:38.621492 master-0 kubenswrapper[4409]: I1203 14:38:38.621430 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:38.621659 master-0 kubenswrapper[4409]: E1203 14:38:38.621586 4409 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 03 14:38:38.621659 master-0 kubenswrapper[4409]: E1203 14:38:38.621646 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist podName:6149804a-9b99-49a8-b88e-001b8a572ce9 nodeName:}" failed. No retries permitted until 2025-12-03 14:38:40.621626468 +0000 UTC m=+752.948688974 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist") pod "speaker-4gqc8" (UID: "6149804a-9b99-49a8-b88e-001b8a572ce9") : secret "metallb-memberlist" not found Dec 03 14:38:39.070807 master-0 kubenswrapper[4409]: I1203 14:38:39.070710 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5"] Dec 03 14:38:39.074213 master-0 kubenswrapper[4409]: I1203 14:38:39.073159 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" Dec 03 14:38:39.084261 master-0 kubenswrapper[4409]: I1203 14:38:39.084196 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl"] Dec 03 14:38:39.106901 master-0 kubenswrapper[4409]: I1203 14:38:39.106807 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5"] Dec 03 14:38:39.107917 master-0 kubenswrapper[4409]: I1203 14:38:39.107097 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.109272 master-0 kubenswrapper[4409]: I1203 14:38:39.109239 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl"] Dec 03 14:38:39.111462 master-0 kubenswrapper[4409]: I1203 14:38:39.111418 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 03 14:38:39.120518 master-0 kubenswrapper[4409]: I1203 14:38:39.120469 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-6x7jt"] Dec 03 14:38:39.135080 master-0 kubenswrapper[4409]: I1203 14:38:39.134485 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.153652 master-0 kubenswrapper[4409]: I1203 14:38:39.153531 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.153652 master-0 kubenswrapper[4409]: I1203 14:38:39.153636 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/fc7a4702-dbd8-4803-b7b4-5b7e51e42bde-kube-api-access-w6wl2\") pod \"nmstate-metrics-7f946cbc9-jdqp5\" (UID: \"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" Dec 03 14:38:39.153652 master-0 kubenswrapper[4409]: I1203 14:38:39.153664 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg5st\" (UniqueName: \"kubernetes.io/projected/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-kube-api-access-sg5st\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.255841 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-nmstate-lock\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.255929 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" 
(UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-dbus-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.255980 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.256064 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/fc7a4702-dbd8-4803-b7b4-5b7e51e42bde-kube-api-access-w6wl2\") pod \"nmstate-metrics-7f946cbc9-jdqp5\" (UID: \"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.256097 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg5st\" (UniqueName: \"kubernetes.io/projected/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-kube-api-access-sg5st\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.256142 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57vsd\" (UniqueName: \"kubernetes.io/projected/95578784-6632-496e-933a-446221fa7d21-kube-api-access-57vsd\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: I1203 14:38:39.256236 4409 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-ovs-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.256518 master-0 kubenswrapper[4409]: E1203 14:38:39.256532 4409 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 03 14:38:39.257192 master-0 kubenswrapper[4409]: E1203 14:38:39.256621 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair podName:dae33b31-3a7f-4ea1-8076-8dee68fcd78e nodeName:}" failed. No retries permitted until 2025-12-03 14:38:39.756591151 +0000 UTC m=+752.083653657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-7xtkl" (UID: "dae33b31-3a7f-4ea1-8076-8dee68fcd78e") : secret "openshift-nmstate-webhook" not found Dec 03 14:38:39.270774 master-0 kubenswrapper[4409]: I1203 14:38:39.270681 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6"] Dec 03 14:38:39.281203 master-0 kubenswrapper[4409]: I1203 14:38:39.273355 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.292295 master-0 kubenswrapper[4409]: I1203 14:38:39.292059 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6"] Dec 03 14:38:39.292295 master-0 kubenswrapper[4409]: I1203 14:38:39.292267 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 03 14:38:39.292625 master-0 kubenswrapper[4409]: I1203 14:38:39.292586 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/fc7a4702-dbd8-4803-b7b4-5b7e51e42bde-kube-api-access-w6wl2\") pod \"nmstate-metrics-7f946cbc9-jdqp5\" (UID: \"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" Dec 03 14:38:39.292676 master-0 kubenswrapper[4409]: I1203 14:38:39.292649 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 03 14:38:39.296553 master-0 kubenswrapper[4409]: I1203 14:38:39.296504 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg5st\" (UniqueName: \"kubernetes.io/projected/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-kube-api-access-sg5st\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.357875 master-0 kubenswrapper[4409]: I1203 14:38:39.357592 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzph5\" (UniqueName: \"kubernetes.io/projected/7af643d9-084b-4e46-ae72-ae875ec0560d-kube-api-access-kzph5\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.357875 master-0 
kubenswrapper[4409]: I1203 14:38:39.357778 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-nmstate-lock\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.357875 master-0 kubenswrapper[4409]: I1203 14:38:39.357806 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-dbus-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.357875 master-0 kubenswrapper[4409]: I1203 14:38:39.357879 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7af643d9-084b-4e46-ae72-ae875ec0560d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.358285 master-0 kubenswrapper[4409]: I1203 14:38:39.357932 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57vsd\" (UniqueName: \"kubernetes.io/projected/95578784-6632-496e-933a-446221fa7d21-kube-api-access-57vsd\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.358285 master-0 kubenswrapper[4409]: I1203 14:38:39.358160 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-nmstate-lock\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 
14:38:39.358400 master-0 kubenswrapper[4409]: I1203 14:38:39.358258 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7af643d9-084b-4e46-ae72-ae875ec0560d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.358400 master-0 kubenswrapper[4409]: I1203 14:38:39.358345 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-ovs-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.358504 master-0 kubenswrapper[4409]: I1203 14:38:39.358483 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-ovs-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.359175 master-0 kubenswrapper[4409]: I1203 14:38:39.358918 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/95578784-6632-496e-933a-446221fa7d21-dbus-socket\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.395889 master-0 kubenswrapper[4409]: I1203 14:38:39.394463 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57vsd\" (UniqueName: \"kubernetes.io/projected/95578784-6632-496e-933a-446221fa7d21-kube-api-access-57vsd\") pod \"nmstate-handler-6x7jt\" (UID: \"95578784-6632-496e-933a-446221fa7d21\") " pod="openshift-nmstate/nmstate-handler-6x7jt" 
Dec 03 14:38:39.419958 master-0 kubenswrapper[4409]: I1203 14:38:39.419705 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" Dec 03 14:38:39.462719 master-0 kubenswrapper[4409]: I1203 14:38:39.461978 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7af643d9-084b-4e46-ae72-ae875ec0560d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.462719 master-0 kubenswrapper[4409]: I1203 14:38:39.462167 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7af643d9-084b-4e46-ae72-ae875ec0560d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.462719 master-0 kubenswrapper[4409]: I1203 14:38:39.462375 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzph5\" (UniqueName: \"kubernetes.io/projected/7af643d9-084b-4e46-ae72-ae875ec0560d-kube-api-access-kzph5\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.463946 master-0 kubenswrapper[4409]: I1203 14:38:39.463911 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7af643d9-084b-4e46-ae72-ae875ec0560d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.467888 master-0 kubenswrapper[4409]: I1203 
14:38:39.467856 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7af643d9-084b-4e46-ae72-ae875ec0560d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.501118 master-0 kubenswrapper[4409]: I1203 14:38:39.500472 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzph5\" (UniqueName: \"kubernetes.io/projected/7af643d9-084b-4e46-ae72-ae875ec0560d-kube-api-access-kzph5\") pod \"nmstate-console-plugin-7fbb5f6569-nrqx6\" (UID: \"7af643d9-084b-4e46-ae72-ae875ec0560d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.506186 master-0 kubenswrapper[4409]: I1203 14:38:39.506019 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:39.520332 master-0 kubenswrapper[4409]: I1203 14:38:39.520235 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cc76c75d9-9mh28"] Dec 03 14:38:39.522823 master-0 kubenswrapper[4409]: I1203 14:38:39.522785 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.570906 master-0 kubenswrapper[4409]: I1203 14:38:39.570715 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.570906 master-0 kubenswrapper[4409]: I1203 14:38:39.570836 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-oauth-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.571080 master-0 kubenswrapper[4409]: I1203 14:38:39.570994 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5crp\" (UniqueName: \"kubernetes.io/projected/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-kube-api-access-j5crp\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.571080 master-0 kubenswrapper[4409]: I1203 14:38:39.571064 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-oauth-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.571182 master-0 kubenswrapper[4409]: I1203 14:38:39.571145 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-service-ca\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.571469 master-0 kubenswrapper[4409]: I1203 14:38:39.571418 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.571561 master-0 kubenswrapper[4409]: I1203 14:38:39.571539 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-trusted-ca-bundle\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.577621 master-0 kubenswrapper[4409]: I1203 14:38:39.574539 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cc76c75d9-9mh28"] Dec 03 14:38:39.589702 master-0 kubenswrapper[4409]: I1203 14:38:39.589384 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6x7jt" event={"ID":"95578784-6632-496e-933a-446221fa7d21","Type":"ContainerStarted","Data":"a82fc36647b7a2589e4fdc15e1269036a178c40674b986d3c191ee1da1ed0616"} Dec 03 14:38:39.599796 master-0 kubenswrapper[4409]: I1203 14:38:39.597076 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pf6cm" event={"ID":"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955","Type":"ContainerStarted","Data":"eb159156bdef95c5e3dc53badc996491c3d7a4e94e941e370dcc7573d3115b64"} Dec 03 14:38:39.671839 master-0 kubenswrapper[4409]: I1203 14:38:39.671367 4409 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673686 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-trusted-ca-bundle\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673815 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673843 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-oauth-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673881 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5crp\" (UniqueName: \"kubernetes.io/projected/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-kube-api-access-j5crp\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673901 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-oauth-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673926 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-service-ca\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.674545 master-0 kubenswrapper[4409]: I1203 14:38:39.673996 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.676936 master-0 kubenswrapper[4409]: I1203 14:38:39.676110 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-service-ca\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.676936 master-0 kubenswrapper[4409]: I1203 14:38:39.676684 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-oauth-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.679219 master-0 kubenswrapper[4409]: I1203 14:38:39.678826 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.685236 master-0 kubenswrapper[4409]: I1203 14:38:39.681407 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-trusted-ca-bundle\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.686686 master-0 kubenswrapper[4409]: I1203 14:38:39.686627 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-oauth-config\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.686791 master-0 kubenswrapper[4409]: I1203 14:38:39.686590 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-console-serving-cert\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.776278 master-0 kubenswrapper[4409]: I1203 14:38:39.776181 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.780470 master-0 kubenswrapper[4409]: I1203 14:38:39.780367 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" 
(UniqueName: \"kubernetes.io/secret/dae33b31-3a7f-4ea1-8076-8dee68fcd78e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-7xtkl\" (UID: \"dae33b31-3a7f-4ea1-8076-8dee68fcd78e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:39.857159 master-0 kubenswrapper[4409]: I1203 14:38:39.856662 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5crp\" (UniqueName: \"kubernetes.io/projected/9591e763-e1e1-4d8b-98a6-30d0cad1a78c-kube-api-access-j5crp\") pod \"console-5cc76c75d9-9mh28\" (UID: \"9591e763-e1e1-4d8b-98a6-30d0cad1a78c\") " pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:39.907662 master-0 kubenswrapper[4409]: I1203 14:38:39.907599 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:40.052574 master-0 kubenswrapper[4409]: I1203 14:38:40.051919 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:40.056869 master-0 kubenswrapper[4409]: I1203 14:38:40.056801 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5"] Dec 03 14:38:40.081397 master-0 kubenswrapper[4409]: W1203 14:38:40.081289 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc7a4702_dbd8_4803_b7b4_5b7e51e42bde.slice/crio-9e2ccef2a73cac1e74d94f04a9ae999a81a833b1ebbedf4c6beaf7ae500e4ae4 WatchSource:0}: Error finding container 9e2ccef2a73cac1e74d94f04a9ae999a81a833b1ebbedf4c6beaf7ae500e4ae4: Status 404 returned error can't find the container with id 9e2ccef2a73cac1e74d94f04a9ae999a81a833b1ebbedf4c6beaf7ae500e4ae4 Dec 03 14:38:40.195702 master-0 kubenswrapper[4409]: I1203 14:38:40.195638 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6"] Dec 03 
14:38:40.480037 master-0 kubenswrapper[4409]: I1203 14:38:40.479948 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cc76c75d9-9mh28"] Dec 03 14:38:40.618782 master-0 kubenswrapper[4409]: I1203 14:38:40.618682 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" event={"ID":"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde","Type":"ContainerStarted","Data":"9e2ccef2a73cac1e74d94f04a9ae999a81a833b1ebbedf4c6beaf7ae500e4ae4"} Dec 03 14:38:40.621850 master-0 kubenswrapper[4409]: I1203 14:38:40.621782 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cc76c75d9-9mh28" event={"ID":"9591e763-e1e1-4d8b-98a6-30d0cad1a78c","Type":"ContainerStarted","Data":"9b11541854353130ad6c8007f113ddd2b2e3fc43c39788605c9e20ba2d688eaa"} Dec 03 14:38:40.624127 master-0 kubenswrapper[4409]: I1203 14:38:40.624043 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" event={"ID":"7af643d9-084b-4e46-ae72-ae875ec0560d","Type":"ContainerStarted","Data":"3beaf4da99c236f6727426374b31d381f711a5ac5d054e3f9cc2667eb6c748f6"} Dec 03 14:38:40.626994 master-0 kubenswrapper[4409]: I1203 14:38:40.626950 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl"] Dec 03 14:38:40.706043 master-0 kubenswrapper[4409]: I1203 14:38:40.703564 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod \"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:40.709204 master-0 kubenswrapper[4409]: I1203 14:38:40.708305 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6149804a-9b99-49a8-b88e-001b8a572ce9-memberlist\") pod 
\"speaker-4gqc8\" (UID: \"6149804a-9b99-49a8-b88e-001b8a572ce9\") " pod="metallb-system/speaker-4gqc8" Dec 03 14:38:40.767731 master-0 kubenswrapper[4409]: I1203 14:38:40.767533 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4gqc8" Dec 03 14:38:41.096689 master-0 kubenswrapper[4409]: W1203 14:38:41.096605 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae33b31_3a7f_4ea1_8076_8dee68fcd78e.slice/crio-f54cb4f710d9e6a6f6608a61303ded5d03878846cbd002ef9bfd8e1ce7101104 WatchSource:0}: Error finding container f54cb4f710d9e6a6f6608a61303ded5d03878846cbd002ef9bfd8e1ce7101104: Status 404 returned error can't find the container with id f54cb4f710d9e6a6f6608a61303ded5d03878846cbd002ef9bfd8e1ce7101104 Dec 03 14:38:41.649216 master-0 kubenswrapper[4409]: I1203 14:38:41.649029 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" event={"ID":"dae33b31-3a7f-4ea1-8076-8dee68fcd78e","Type":"ContainerStarted","Data":"f54cb4f710d9e6a6f6608a61303ded5d03878846cbd002ef9bfd8e1ce7101104"} Dec 03 14:38:41.652610 master-0 kubenswrapper[4409]: I1203 14:38:41.652519 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cc76c75d9-9mh28" event={"ID":"9591e763-e1e1-4d8b-98a6-30d0cad1a78c","Type":"ContainerStarted","Data":"39fba79106f1f807026ac5ece4d8a84a490cae3c234c9d72b36fefca6ea5b67c"} Dec 03 14:38:41.658143 master-0 kubenswrapper[4409]: I1203 14:38:41.657868 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-pf6cm" event={"ID":"930fd3d1-d7ce-49f1-a3ea-1dd7493f0955","Type":"ContainerStarted","Data":"a1bbe360bdb709e14379e15996a26236f4c94c97096f599bef3cbe44b2743e4c"} Dec 03 14:38:41.658143 master-0 kubenswrapper[4409]: I1203 14:38:41.658034 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:38:41.660911 master-0 kubenswrapper[4409]: I1203 14:38:41.660865 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4gqc8" event={"ID":"6149804a-9b99-49a8-b88e-001b8a572ce9","Type":"ContainerStarted","Data":"56a391489a25f12b34a491ab7ac2416dc26dd555890a29d2cc3acf4994023dfd"} Dec 03 14:38:41.660989 master-0 kubenswrapper[4409]: I1203 14:38:41.660939 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4gqc8" event={"ID":"6149804a-9b99-49a8-b88e-001b8a572ce9","Type":"ContainerStarted","Data":"544a3357ffbdf73336c8cfa92b79b4f4223769c6f1783d80c80044f6d6c4873f"} Dec 03 14:38:41.683534 master-0 kubenswrapper[4409]: I1203 14:38:41.683118 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cc76c75d9-9mh28" podStartSLOduration=2.683089689 podStartE2EDuration="2.683089689s" podCreationTimestamp="2025-12-03 14:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:38:41.680936587 +0000 UTC m=+754.007999103" watchObservedRunningTime="2025-12-03 14:38:41.683089689 +0000 UTC m=+754.010152195" Dec 03 14:38:41.761019 master-0 kubenswrapper[4409]: I1203 14:38:41.760824 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-pf6cm" podStartSLOduration=3.194099894 podStartE2EDuration="5.759708007s" podCreationTimestamp="2025-12-03 14:38:36 +0000 UTC" firstStartedPulling="2025-12-03 14:38:38.620224448 +0000 UTC m=+750.947286954" lastFinishedPulling="2025-12-03 14:38:41.185832561 +0000 UTC m=+753.512895067" observedRunningTime="2025-12-03 14:38:41.750188546 +0000 UTC m=+754.077251072" watchObservedRunningTime="2025-12-03 14:38:41.759708007 +0000 UTC m=+754.086770513" Dec 03 14:38:46.719839 master-0 kubenswrapper[4409]: I1203 14:38:46.719766 4409 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4gqc8" event={"ID":"6149804a-9b99-49a8-b88e-001b8a572ce9","Type":"ContainerStarted","Data":"8c2af3dd1d6a28f7f7824c6efc58911ce591667c0e2cc0a4a512cf9a71e97196"} Dec 03 14:38:46.720578 master-0 kubenswrapper[4409]: I1203 14:38:46.719965 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4gqc8" Dec 03 14:38:46.721835 master-0 kubenswrapper[4409]: I1203 14:38:46.721714 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" event={"ID":"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde","Type":"ContainerStarted","Data":"3e0a2c0f5166377995afd5c3002bfa529e0f7b340ea097c8cbc11b4d2150076d"} Dec 03 14:38:46.721835 master-0 kubenswrapper[4409]: I1203 14:38:46.721751 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" event={"ID":"fc7a4702-dbd8-4803-b7b4-5b7e51e42bde","Type":"ContainerStarted","Data":"ede788d39906b52c060fe905f0ef4c8ac2cb0be9ea7ed4665d07aa17c0cd7cbc"} Dec 03 14:38:46.724825 master-0 kubenswrapper[4409]: I1203 14:38:46.724760 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" event={"ID":"58667331-42ed-4c03-8e88-38d27c2cc026","Type":"ContainerStarted","Data":"18efe838452840d4e5063e7f679db5aad9742f8c9ec994ad4c77965321111448"} Dec 03 14:38:46.724981 master-0 kubenswrapper[4409]: I1203 14:38:46.724951 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:46.726985 master-0 kubenswrapper[4409]: I1203 14:38:46.726938 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-6x7jt" event={"ID":"95578784-6632-496e-933a-446221fa7d21","Type":"ContainerStarted","Data":"e7ec7136829b86330c9e0723a33fedc2e395e7a523e032c86fd75537586074a2"} Dec 03 14:38:46.727098 master-0 
kubenswrapper[4409]: I1203 14:38:46.727067 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:46.729014 master-0 kubenswrapper[4409]: I1203 14:38:46.728956 4409 generic.go:334] "Generic (PLEG): container finished" podID="22e8cc91-45ad-4646-8682-fdf4be50815c" containerID="45c092fec850ea5c0996ff55afe4c16a028eea7b3ab3fe394d85e169b141d15c" exitCode=0 Dec 03 14:38:46.729078 master-0 kubenswrapper[4409]: I1203 14:38:46.728992 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerDied","Data":"45c092fec850ea5c0996ff55afe4c16a028eea7b3ab3fe394d85e169b141d15c"} Dec 03 14:38:46.731607 master-0 kubenswrapper[4409]: I1203 14:38:46.730753 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" event={"ID":"dae33b31-3a7f-4ea1-8076-8dee68fcd78e","Type":"ContainerStarted","Data":"9ca07a92a9af691cb3a1dad520b1beb1425593e4a7c99237436b98a40b4b4251"} Dec 03 14:38:46.731607 master-0 kubenswrapper[4409]: I1203 14:38:46.731345 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:38:46.748657 master-0 kubenswrapper[4409]: I1203 14:38:46.748575 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" event={"ID":"7af643d9-084b-4e46-ae72-ae875ec0560d","Type":"ContainerStarted","Data":"e123cb7fac85c063973a30e38139101900813c0c44ad1879289d3655d3fcfd1c"} Dec 03 14:38:46.749999 master-0 kubenswrapper[4409]: I1203 14:38:46.749763 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4gqc8" podStartSLOduration=10.749727765 podStartE2EDuration="10.749727765s" podCreationTimestamp="2025-12-03 14:38:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:38:46.746693509 +0000 UTC m=+759.073756025" watchObservedRunningTime="2025-12-03 14:38:46.749727765 +0000 UTC m=+759.076790291" Dec 03 14:38:46.769950 master-0 kubenswrapper[4409]: I1203 14:38:46.769308 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" podStartSLOduration=3.505633793 podStartE2EDuration="7.769276941s" podCreationTimestamp="2025-12-03 14:38:39 +0000 UTC" firstStartedPulling="2025-12-03 14:38:41.157502435 +0000 UTC m=+753.484564941" lastFinishedPulling="2025-12-03 14:38:45.421145583 +0000 UTC m=+757.748208089" observedRunningTime="2025-12-03 14:38:46.766364348 +0000 UTC m=+759.093426864" watchObservedRunningTime="2025-12-03 14:38:46.769276941 +0000 UTC m=+759.096339447" Dec 03 14:38:46.792668 master-0 kubenswrapper[4409]: I1203 14:38:46.792547 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jdqp5" podStartSLOduration=2.500023913 podStartE2EDuration="7.792512622s" podCreationTimestamp="2025-12-03 14:38:39 +0000 UTC" firstStartedPulling="2025-12-03 14:38:40.096750487 +0000 UTC m=+752.423812993" lastFinishedPulling="2025-12-03 14:38:45.389239186 +0000 UTC m=+757.716301702" observedRunningTime="2025-12-03 14:38:46.790490844 +0000 UTC m=+759.117553360" watchObservedRunningTime="2025-12-03 14:38:46.792512622 +0000 UTC m=+759.119575138" Dec 03 14:38:46.856157 master-0 kubenswrapper[4409]: I1203 14:38:46.833148 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" podStartSLOduration=2.940204375 podStartE2EDuration="10.833113496s" podCreationTimestamp="2025-12-03 14:38:36 +0000 UTC" firstStartedPulling="2025-12-03 14:38:37.528244393 +0000 UTC m=+749.855306899" lastFinishedPulling="2025-12-03 14:38:45.421153514 +0000 UTC m=+757.748216020" 
observedRunningTime="2025-12-03 14:38:46.827944479 +0000 UTC m=+759.155006985" watchObservedRunningTime="2025-12-03 14:38:46.833113496 +0000 UTC m=+759.160176002" Dec 03 14:38:46.858881 master-0 kubenswrapper[4409]: I1203 14:38:46.858787 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-6x7jt" podStartSLOduration=2.005103822 podStartE2EDuration="7.858753505s" podCreationTimestamp="2025-12-03 14:38:39 +0000 UTC" firstStartedPulling="2025-12-03 14:38:39.567988554 +0000 UTC m=+751.895051080" lastFinishedPulling="2025-12-03 14:38:45.421638257 +0000 UTC m=+757.748700763" observedRunningTime="2025-12-03 14:38:46.848655368 +0000 UTC m=+759.175717874" watchObservedRunningTime="2025-12-03 14:38:46.858753505 +0000 UTC m=+759.185816011" Dec 03 14:38:46.897445 master-0 kubenswrapper[4409]: I1203 14:38:46.897362 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-nrqx6" podStartSLOduration=2.70359243 podStartE2EDuration="7.897337942s" podCreationTimestamp="2025-12-03 14:38:39 +0000 UTC" firstStartedPulling="2025-12-03 14:38:40.223069908 +0000 UTC m=+752.550132434" lastFinishedPulling="2025-12-03 14:38:45.41681544 +0000 UTC m=+757.743877946" observedRunningTime="2025-12-03 14:38:46.895475789 +0000 UTC m=+759.222538295" watchObservedRunningTime="2025-12-03 14:38:46.897337942 +0000 UTC m=+759.224400448" Dec 03 14:38:47.761545 master-0 kubenswrapper[4409]: I1203 14:38:47.761444 4409 generic.go:334] "Generic (PLEG): container finished" podID="22e8cc91-45ad-4646-8682-fdf4be50815c" containerID="1cffa7b4c6a9a0ea30ec4e31296855439863fbf6e226a68855afad3d5553607e" exitCode=0 Dec 03 14:38:47.761545 master-0 kubenswrapper[4409]: I1203 14:38:47.761526 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" 
event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerDied","Data":"1cffa7b4c6a9a0ea30ec4e31296855439863fbf6e226a68855afad3d5553607e"} Dec 03 14:38:48.775700 master-0 kubenswrapper[4409]: I1203 14:38:48.775595 4409 generic.go:334] "Generic (PLEG): container finished" podID="22e8cc91-45ad-4646-8682-fdf4be50815c" containerID="78c907d35379a985bccbda67dcccd86cf95c1ed39d10e7658647b87759b5d859" exitCode=0 Dec 03 14:38:48.775700 master-0 kubenswrapper[4409]: I1203 14:38:48.775669 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerDied","Data":"78c907d35379a985bccbda67dcccd86cf95c1ed39d10e7658647b87759b5d859"} Dec 03 14:38:49.793275 master-0 kubenswrapper[4409]: I1203 14:38:49.793188 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"88a5fd4647a147352a0f5b4ac0e0c9147dc268f9f1c5d2bebfe064a5ece70286"} Dec 03 14:38:49.793275 master-0 kubenswrapper[4409]: I1203 14:38:49.793267 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"9f46580e0eaa016b4544db3222beadf3601d8150ad479896019ad4554943f35a"} Dec 03 14:38:49.793275 master-0 kubenswrapper[4409]: I1203 14:38:49.793283 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"0d8c316b3152d99c2dce662cbdb51a7c14285cbe97d1a5b2bfb31ada1f97114a"} Dec 03 14:38:49.793275 master-0 kubenswrapper[4409]: I1203 14:38:49.793295 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"2b29b6ccc6c85e641993a65078c9004fb7c3f667f4ac14913afd56ceed94b11e"} 
Dec 03 14:38:49.794787 master-0 kubenswrapper[4409]: I1203 14:38:49.793309 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"cac7480d176a1b86c8dd6f8a31ab2f841500da930ffbcdb793addcb7e9686cc2"} Dec 03 14:38:49.909311 master-0 kubenswrapper[4409]: I1203 14:38:49.909254 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:49.909429 master-0 kubenswrapper[4409]: I1203 14:38:49.909324 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:49.914781 master-0 kubenswrapper[4409]: I1203 14:38:49.914741 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:50.810359 master-0 kubenswrapper[4409]: I1203 14:38:50.810267 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-g7b5b" event={"ID":"22e8cc91-45ad-4646-8682-fdf4be50815c","Type":"ContainerStarted","Data":"044f1537cd2fc8ca665ac403dc2c9865c3a6d1f69f3e78e42ca3981005c86514"} Dec 03 14:38:50.811074 master-0 kubenswrapper[4409]: I1203 14:38:50.810861 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:50.815124 master-0 kubenswrapper[4409]: I1203 14:38:50.815088 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cc76c75d9-9mh28" Dec 03 14:38:50.850373 master-0 kubenswrapper[4409]: I1203 14:38:50.850187 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-g7b5b" podStartSLOduration=6.776127083 podStartE2EDuration="14.850137853s" podCreationTimestamp="2025-12-03 14:38:36 +0000 UTC" firstStartedPulling="2025-12-03 14:38:37.348168583 +0000 UTC m=+749.675231129" 
lastFinishedPulling="2025-12-03 14:38:45.422179393 +0000 UTC m=+757.749241899" observedRunningTime="2025-12-03 14:38:50.841673622 +0000 UTC m=+763.168736148" watchObservedRunningTime="2025-12-03 14:38:50.850137853 +0000 UTC m=+763.177200379" Dec 03 14:38:50.942778 master-0 kubenswrapper[4409]: I1203 14:38:50.942676 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"] Dec 03 14:38:52.099021 master-0 kubenswrapper[4409]: I1203 14:38:52.098926 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:52.138071 master-0 kubenswrapper[4409]: I1203 14:38:52.137881 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:38:54.532232 master-0 kubenswrapper[4409]: I1203 14:38:54.532154 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-6x7jt" Dec 03 14:38:57.081379 master-0 kubenswrapper[4409]: I1203 14:38:57.081315 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-4gsth" Dec 03 14:38:57.850933 master-0 kubenswrapper[4409]: I1203 14:38:57.850850 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-pf6cm" Dec 03 14:39:00.061721 master-0 kubenswrapper[4409]: I1203 14:39:00.061626 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-7xtkl" Dec 03 14:39:00.772195 master-0 kubenswrapper[4409]: I1203 14:39:00.772087 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4gqc8" Dec 03 14:39:07.101567 master-0 kubenswrapper[4409]: I1203 14:39:07.101501 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-g7b5b" Dec 03 14:39:08.531552 master-0 
kubenswrapper[4409]: I1203 14:39:08.531468 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-pq5qt"] Dec 03 14:39:08.533102 master-0 kubenswrapper[4409]: I1203 14:39:08.533071 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.536587 master-0 kubenswrapper[4409]: I1203 14:39:08.536549 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Dec 03 14:39:08.576173 master-0 kubenswrapper[4409]: I1203 14:39:08.576072 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-pq5qt"] Dec 03 14:39:08.651252 master-0 kubenswrapper[4409]: I1203 14:39:08.651166 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-sys\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651286 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-registration-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651324 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-node-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651406 4409 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/4700133d-8aa7-4c2e-95a2-20732b3830ad-metrics-cert\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651452 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-pod-volumes-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651537 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-run-udev\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.651655 master-0 kubenswrapper[4409]: I1203 14:39:08.651597 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6h6j\" (UniqueName: \"kubernetes.io/projected/4700133d-8aa7-4c2e-95a2-20732b3830ad-kube-api-access-v6h6j\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.652226 master-0 kubenswrapper[4409]: I1203 14:39:08.652187 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-file-lock-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.652317 master-0 kubenswrapper[4409]: I1203 14:39:08.652225 4409 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-lvmd-config\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.652317 master-0 kubenswrapper[4409]: I1203 14:39:08.652264 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-device-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.652419 master-0 kubenswrapper[4409]: I1203 14:39:08.652370 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-csi-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.753869 master-0 kubenswrapper[4409]: I1203 14:39:08.753748 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-registration-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.753869 master-0 kubenswrapper[4409]: I1203 14:39:08.753865 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-node-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.753908 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/4700133d-8aa7-4c2e-95a2-20732b3830ad-metrics-cert\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.753936 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-pod-volumes-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.753971 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-run-udev\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754027 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6h6j\" (UniqueName: \"kubernetes.io/projected/4700133d-8aa7-4c2e-95a2-20732b3830ad-kube-api-access-v6h6j\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754052 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-file-lock-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754068 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: 
\"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-lvmd-config\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754088 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-device-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754102 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-csi-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754130 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-sys\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754270 master-0 kubenswrapper[4409]: I1203 14:39:08.754227 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-sys\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754752 master-0 kubenswrapper[4409]: I1203 14:39:08.754490 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-registration-dir\") pod \"vg-manager-pq5qt\" (UID: 
\"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.754752 master-0 kubenswrapper[4409]: I1203 14:39:08.754683 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-node-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.756580 master-0 kubenswrapper[4409]: I1203 14:39:08.756533 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-file-lock-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.756723 master-0 kubenswrapper[4409]: I1203 14:39:08.756702 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-pod-volumes-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.756805 master-0 kubenswrapper[4409]: I1203 14:39:08.756750 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-run-udev\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.756847 master-0 kubenswrapper[4409]: I1203 14:39:08.756803 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-device-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.757024 master-0 
kubenswrapper[4409]: I1203 14:39:08.756986 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-lvmd-config\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.757229 master-0 kubenswrapper[4409]: I1203 14:39:08.757181 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/4700133d-8aa7-4c2e-95a2-20732b3830ad-csi-plugin-dir\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.758218 master-0 kubenswrapper[4409]: I1203 14:39:08.758169 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/4700133d-8aa7-4c2e-95a2-20732b3830ad-metrics-cert\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.779125 master-0 kubenswrapper[4409]: I1203 14:39:08.779069 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6h6j\" (UniqueName: \"kubernetes.io/projected/4700133d-8aa7-4c2e-95a2-20732b3830ad-kube-api-access-v6h6j\") pod \"vg-manager-pq5qt\" (UID: \"4700133d-8aa7-4c2e-95a2-20732b3830ad\") " pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:08.856529 master-0 kubenswrapper[4409]: I1203 14:39:08.856418 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-pq5qt" Dec 03 14:39:09.182999 master-0 kubenswrapper[4409]: I1203 14:39:09.182894 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-pq5qt"] Dec 03 14:39:10.007420 master-0 kubenswrapper[4409]: I1203 14:39:10.007329 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-pq5qt" event={"ID":"4700133d-8aa7-4c2e-95a2-20732b3830ad","Type":"ContainerStarted","Data":"474fb2b4f77d4c1f7ed99c5679095f15bb5b9d456222279441dddeb86b76dd3b"} Dec 03 14:39:10.007420 master-0 kubenswrapper[4409]: I1203 14:39:10.007412 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-pq5qt" event={"ID":"4700133d-8aa7-4c2e-95a2-20732b3830ad","Type":"ContainerStarted","Data":"63dd71990077a651a73747229c144563415a803b8f07511a47f8058a636b0f62"} Dec 03 14:39:10.043714 master-0 kubenswrapper[4409]: I1203 14:39:10.043576 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-pq5qt" podStartSLOduration=2.043541444 podStartE2EDuration="2.043541444s" podCreationTimestamp="2025-12-03 14:39:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:39:10.037380479 +0000 UTC m=+782.364443045" watchObservedRunningTime="2025-12-03 14:39:10.043541444 +0000 UTC m=+782.370603940" Dec 03 14:39:12.028775 master-0 kubenswrapper[4409]: I1203 14:39:12.028740 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-pq5qt_4700133d-8aa7-4c2e-95a2-20732b3830ad/vg-manager/0.log" Dec 03 14:39:12.029538 master-0 kubenswrapper[4409]: I1203 14:39:12.029511 4409 generic.go:334] "Generic (PLEG): container finished" podID="4700133d-8aa7-4c2e-95a2-20732b3830ad" containerID="474fb2b4f77d4c1f7ed99c5679095f15bb5b9d456222279441dddeb86b76dd3b" exitCode=1 Dec 03 14:39:12.029679 master-0 
kubenswrapper[4409]: I1203 14:39:12.029629 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-pq5qt" event={"ID":"4700133d-8aa7-4c2e-95a2-20732b3830ad","Type":"ContainerDied","Data":"474fb2b4f77d4c1f7ed99c5679095f15bb5b9d456222279441dddeb86b76dd3b"} Dec 03 14:39:12.030514 master-0 kubenswrapper[4409]: I1203 14:39:12.030485 4409 scope.go:117] "RemoveContainer" containerID="474fb2b4f77d4c1f7ed99c5679095f15bb5b9d456222279441dddeb86b76dd3b" Dec 03 14:39:12.461546 master-0 kubenswrapper[4409]: I1203 14:39:12.460835 4409 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Dec 03 14:39:12.617313 master-0 kubenswrapper[4409]: I1203 14:39:12.617041 4409 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2025-12-03T14:39:12.46088855Z","Handler":null,"Name":""} Dec 03 14:39:12.620738 master-0 kubenswrapper[4409]: I1203 14:39:12.620712 4409 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Dec 03 14:39:12.620889 master-0 kubenswrapper[4409]: I1203 14:39:12.620782 4409 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Dec 03 14:39:13.045924 master-0 kubenswrapper[4409]: I1203 14:39:13.045756 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-pq5qt_4700133d-8aa7-4c2e-95a2-20732b3830ad/vg-manager/0.log" Dec 03 14:39:13.045924 master-0 kubenswrapper[4409]: I1203 14:39:13.045825 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-pq5qt" 
event={"ID":"4700133d-8aa7-4c2e-95a2-20732b3830ad","Type":"ContainerStarted","Data":"3fbaf3f07c30d33cb165a7dfce6cd1a948791a8d967a8391889f9ebe91039408"} Dec 03 14:39:15.166027 master-0 kubenswrapper[4409]: I1203 14:39:15.163892 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zjrm9"] Dec 03 14:39:15.174027 master-0 kubenswrapper[4409]: I1203 14:39:15.170197 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zjrm9" Dec 03 14:39:15.176575 master-0 kubenswrapper[4409]: I1203 14:39:15.176524 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 03 14:39:15.176708 master-0 kubenswrapper[4409]: I1203 14:39:15.176591 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 03 14:39:15.192015 master-0 kubenswrapper[4409]: I1203 14:39:15.191946 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrq4\" (UniqueName: \"kubernetes.io/projected/48c9f27f-e3f6-42bc-9160-ada125a1c1db-kube-api-access-wwrq4\") pod \"openstack-operator-index-zjrm9\" (UID: \"48c9f27f-e3f6-42bc-9160-ada125a1c1db\") " pod="openstack-operators/openstack-operator-index-zjrm9" Dec 03 14:39:15.195925 master-0 kubenswrapper[4409]: I1203 14:39:15.192447 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zjrm9"] Dec 03 14:39:15.293317 master-0 kubenswrapper[4409]: I1203 14:39:15.293245 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwrq4\" (UniqueName: \"kubernetes.io/projected/48c9f27f-e3f6-42bc-9160-ada125a1c1db-kube-api-access-wwrq4\") pod \"openstack-operator-index-zjrm9\" (UID: \"48c9f27f-e3f6-42bc-9160-ada125a1c1db\") " pod="openstack-operators/openstack-operator-index-zjrm9" Dec 03 
14:39:15.315836 master-0 kubenswrapper[4409]: I1203 14:39:15.315783 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwrq4\" (UniqueName: \"kubernetes.io/projected/48c9f27f-e3f6-42bc-9160-ada125a1c1db-kube-api-access-wwrq4\") pod \"openstack-operator-index-zjrm9\" (UID: \"48c9f27f-e3f6-42bc-9160-ada125a1c1db\") " pod="openstack-operators/openstack-operator-index-zjrm9" Dec 03 14:39:15.502373 master-0 kubenswrapper[4409]: I1203 14:39:15.502226 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zjrm9" Dec 03 14:39:15.914944 master-0 kubenswrapper[4409]: I1203 14:39:15.914882 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zjrm9"] Dec 03 14:39:15.987984 master-0 kubenswrapper[4409]: I1203 14:39:15.987895 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f689c85c4-fv97m" podUID="50223b50-44db-4dad-95c9-fcd93aad3c7c" containerName="console" containerID="cri-o://245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a" gracePeriod=15 Dec 03 14:39:16.089503 master-0 kubenswrapper[4409]: I1203 14:39:16.089444 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zjrm9" event={"ID":"48c9f27f-e3f6-42bc-9160-ada125a1c1db","Type":"ContainerStarted","Data":"ae58953ec4f6b88cd69417575623ebad9d2fc54678e2035ebe93b84556bee46f"} Dec 03 14:39:16.508207 master-0 kubenswrapper[4409]: I1203 14:39:16.508073 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f689c85c4-fv97m_50223b50-44db-4dad-95c9-fcd93aad3c7c/console/0.log" Dec 03 14:39:16.508860 master-0 kubenswrapper[4409]: I1203 14:39:16.508183 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525127 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525199 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525282 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525300 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525346 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8m4s\" (UniqueName: \"kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525381 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525408 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle\") pod \"50223b50-44db-4dad-95c9-fcd93aad3c7c\" (UID: \"50223b50-44db-4dad-95c9-fcd93aad3c7c\") "
Dec 03 14:39:16.526084 master-0 kubenswrapper[4409]: I1203 14:39:16.525932 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:39:16.526858 master-0 kubenswrapper[4409]: I1203 14:39:16.526646 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:39:16.526858 master-0 kubenswrapper[4409]: I1203 14:39:16.526790 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:39:16.528561 master-0 kubenswrapper[4409]: I1203 14:39:16.527122 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config" (OuterVolumeSpecName: "console-config") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 03 14:39:16.538531 master-0 kubenswrapper[4409]: I1203 14:39:16.538453 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 14:39:16.538675 master-0 kubenswrapper[4409]: I1203 14:39:16.538547 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s" (OuterVolumeSpecName: "kube-api-access-n8m4s") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "kube-api-access-n8m4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:39:16.538675 master-0 kubenswrapper[4409]: I1203 14:39:16.538613 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "50223b50-44db-4dad-95c9-fcd93aad3c7c" (UID: "50223b50-44db-4dad-95c9-fcd93aad3c7c"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627346 4409 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627468 4409 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627481 4409 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627492 4409 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-service-ca\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627504 4409 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627515 4409 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50223b50-44db-4dad-95c9-fcd93aad3c7c-console-config\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:16.627607 master-0 kubenswrapper[4409]: I1203 14:39:16.627524 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8m4s\" (UniqueName: \"kubernetes.io/projected/50223b50-44db-4dad-95c9-fcd93aad3c7c-kube-api-access-n8m4s\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:17.100321 master-0 kubenswrapper[4409]: I1203 14:39:17.100266 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f689c85c4-fv97m_50223b50-44db-4dad-95c9-fcd93aad3c7c/console/0.log"
Dec 03 14:39:17.100617 master-0 kubenswrapper[4409]: I1203 14:39:17.100330 4409 generic.go:334] "Generic (PLEG): container finished" podID="50223b50-44db-4dad-95c9-fcd93aad3c7c" containerID="245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a" exitCode=2
Dec 03 14:39:17.100617 master-0 kubenswrapper[4409]: I1203 14:39:17.100372 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f689c85c4-fv97m" event={"ID":"50223b50-44db-4dad-95c9-fcd93aad3c7c","Type":"ContainerDied","Data":"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"}
Dec 03 14:39:17.100617 master-0 kubenswrapper[4409]: I1203 14:39:17.100406 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f689c85c4-fv97m" event={"ID":"50223b50-44db-4dad-95c9-fcd93aad3c7c","Type":"ContainerDied","Data":"1ea64a1801dd5f4d806c35660c47d2849deb9d33308a1112b3cc10348fb41328"}
Dec 03 14:39:17.100617 master-0 kubenswrapper[4409]: I1203 14:39:17.100429 4409 scope.go:117] "RemoveContainer" containerID="245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"
Dec 03 14:39:17.100843 master-0 kubenswrapper[4409]: I1203 14:39:17.100700 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f689c85c4-fv97m"
Dec 03 14:39:17.367662 master-0 kubenswrapper[4409]: I1203 14:39:17.367459 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"]
Dec 03 14:39:17.368672 master-0 kubenswrapper[4409]: I1203 14:39:17.368626 4409 scope.go:117] "RemoveContainer" containerID="245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"
Dec 03 14:39:17.372434 master-0 kubenswrapper[4409]: E1203 14:39:17.372388 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a\": container with ID starting with 245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a not found: ID does not exist" containerID="245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"
Dec 03 14:39:17.372563 master-0 kubenswrapper[4409]: I1203 14:39:17.372436 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a"} err="failed to get container status \"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a\": rpc error: code = NotFound desc = could not find container \"245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a\": container with ID starting with 245d4d20ff6649251200df89bd68fbac6ed6cd26f8eee29898325627fb08e20a not found: ID does not exist"
Dec 03 14:39:17.373914 master-0 kubenswrapper[4409]: I1203 14:39:17.373871 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f689c85c4-fv97m"]
Dec 03 14:39:17.825421 master-0 kubenswrapper[4409]: I1203 14:39:17.825356 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50223b50-44db-4dad-95c9-fcd93aad3c7c" path="/var/lib/kubelet/pods/50223b50-44db-4dad-95c9-fcd93aad3c7c/volumes"
Dec 03 14:39:18.128955 master-0 kubenswrapper[4409]: I1203 14:39:18.128858 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zjrm9" event={"ID":"48c9f27f-e3f6-42bc-9160-ada125a1c1db","Type":"ContainerStarted","Data":"34bde4b02afee7cafd7031da897e47438ccfdd0c1679b73b3107aa6152c74af8"}
Dec 03 14:39:18.153595 master-0 kubenswrapper[4409]: I1203 14:39:18.153460 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zjrm9" podStartSLOduration=1.665772167 podStartE2EDuration="3.153436403s" podCreationTimestamp="2025-12-03 14:39:15 +0000 UTC" firstStartedPulling="2025-12-03 14:39:15.924988057 +0000 UTC m=+788.252050563" lastFinishedPulling="2025-12-03 14:39:17.412652293 +0000 UTC m=+789.739714799" observedRunningTime="2025-12-03 14:39:18.151186459 +0000 UTC m=+790.478248965" watchObservedRunningTime="2025-12-03 14:39:18.153436403 +0000 UTC m=+790.480498929"
Dec 03 14:39:18.857202 master-0 kubenswrapper[4409]: I1203 14:39:18.857115 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-pq5qt"
Dec 03 14:39:18.859209 master-0 kubenswrapper[4409]: I1203 14:39:18.859176 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-pq5qt"
Dec 03 14:39:19.136131 master-0 kubenswrapper[4409]: I1203 14:39:19.135831 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-pq5qt"
Dec 03 14:39:19.137226 master-0 kubenswrapper[4409]: I1203 14:39:19.137175 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-pq5qt"
Dec 03 14:39:25.502453 master-0 kubenswrapper[4409]: I1203 14:39:25.502359 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zjrm9"
Dec 03 14:39:25.502453 master-0 kubenswrapper[4409]: I1203 14:39:25.502448 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zjrm9"
Dec 03 14:39:25.550220 master-0 kubenswrapper[4409]: I1203 14:39:25.550156 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zjrm9"
Dec 03 14:39:26.254143 master-0 kubenswrapper[4409]: I1203 14:39:26.253942 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zjrm9"
Dec 03 14:39:32.353724 master-0 kubenswrapper[4409]: I1203 14:39:32.353485 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"]
Dec 03 14:39:32.354673 master-0 kubenswrapper[4409]: E1203 14:39:32.354229 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50223b50-44db-4dad-95c9-fcd93aad3c7c" containerName="console"
Dec 03 14:39:32.354673 master-0 kubenswrapper[4409]: I1203 14:39:32.354248 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="50223b50-44db-4dad-95c9-fcd93aad3c7c" containerName="console"
Dec 03 14:39:32.354673 master-0 kubenswrapper[4409]: I1203 14:39:32.354467 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="50223b50-44db-4dad-95c9-fcd93aad3c7c" containerName="console"
Dec 03 14:39:32.356519 master-0 kubenswrapper[4409]: I1203 14:39:32.356483 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.366355 master-0 kubenswrapper[4409]: I1203 14:39:32.366291 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.366550 master-0 kubenswrapper[4409]: I1203 14:39:32.366440 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25h9\" (UniqueName: \"kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.366550 master-0 kubenswrapper[4409]: I1203 14:39:32.366482 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.375434 master-0 kubenswrapper[4409]: I1203 14:39:32.371533 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"]
Dec 03 14:39:32.467510 master-0 kubenswrapper[4409]: I1203 14:39:32.467444 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25h9\" (UniqueName: \"kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.467510 master-0 kubenswrapper[4409]: I1203 14:39:32.467514 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.467788 master-0 kubenswrapper[4409]: I1203 14:39:32.467557 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.468070 master-0 kubenswrapper[4409]: I1203 14:39:32.468048 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.468785 master-0 kubenswrapper[4409]: I1203 14:39:32.468763 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.487716 master-0 kubenswrapper[4409]: I1203 14:39:32.487658 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25h9\" (UniqueName: \"kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9\") pod \"98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") " pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:32.679634 master-0 kubenswrapper[4409]: I1203 14:39:32.679431 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:33.104098 master-0 kubenswrapper[4409]: I1203 14:39:33.103974 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"]
Dec 03 14:39:33.112092 master-0 kubenswrapper[4409]: W1203 14:39:33.111982 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb70ff707_8bfb_49fe_8a92_177690d528bb.slice/crio-9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115 WatchSource:0}: Error finding container 9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115: Status 404 returned error can't find the container with id 9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115
Dec 03 14:39:33.278371 master-0 kubenswrapper[4409]: I1203 14:39:33.278302 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925" event={"ID":"b70ff707-8bfb-49fe-8a92-177690d528bb","Type":"ContainerStarted","Data":"9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115"}
Dec 03 14:39:34.290315 master-0 kubenswrapper[4409]: I1203 14:39:34.290220 4409 generic.go:334] "Generic (PLEG): container finished" podID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerID="a3610f4b89dfc5a82eaf1f09a5dd796e10c887fad8f7432462ad160a351bb267" exitCode=0
Dec 03 14:39:34.291075 master-0 kubenswrapper[4409]: I1203 14:39:34.290314 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925" event={"ID":"b70ff707-8bfb-49fe-8a92-177690d528bb","Type":"ContainerDied","Data":"a3610f4b89dfc5a82eaf1f09a5dd796e10c887fad8f7432462ad160a351bb267"}
Dec 03 14:39:36.314322 master-0 kubenswrapper[4409]: I1203 14:39:36.314239 4409 generic.go:334] "Generic (PLEG): container finished" podID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerID="1e9c72a5edcb04d395fe66ca641c9c3955870ba9e309acad40108cb5286f963e" exitCode=0
Dec 03 14:39:36.314322 master-0 kubenswrapper[4409]: I1203 14:39:36.314307 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925" event={"ID":"b70ff707-8bfb-49fe-8a92-177690d528bb","Type":"ContainerDied","Data":"1e9c72a5edcb04d395fe66ca641c9c3955870ba9e309acad40108cb5286f963e"}
Dec 03 14:39:37.323797 master-0 kubenswrapper[4409]: I1203 14:39:37.323692 4409 generic.go:334] "Generic (PLEG): container finished" podID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerID="9f22b884b2ffa3fe67251928085e3a6cbe5cdbbe28f563beaf1ba9beaeccf5cd" exitCode=0
Dec 03 14:39:37.323797 master-0 kubenswrapper[4409]: I1203 14:39:37.323755 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925" event={"ID":"b70ff707-8bfb-49fe-8a92-177690d528bb","Type":"ContainerDied","Data":"9f22b884b2ffa3fe67251928085e3a6cbe5cdbbe28f563beaf1ba9beaeccf5cd"}
Dec 03 14:39:38.703206 master-0 kubenswrapper[4409]: I1203 14:39:38.703090 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:38.806716 master-0 kubenswrapper[4409]: I1203 14:39:38.806638 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util\") pod \"b70ff707-8bfb-49fe-8a92-177690d528bb\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") "
Dec 03 14:39:38.807120 master-0 kubenswrapper[4409]: I1203 14:39:38.806848 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle\") pod \"b70ff707-8bfb-49fe-8a92-177690d528bb\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") "
Dec 03 14:39:38.807120 master-0 kubenswrapper[4409]: I1203 14:39:38.806905 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25h9\" (UniqueName: \"kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9\") pod \"b70ff707-8bfb-49fe-8a92-177690d528bb\" (UID: \"b70ff707-8bfb-49fe-8a92-177690d528bb\") "
Dec 03 14:39:38.807512 master-0 kubenswrapper[4409]: I1203 14:39:38.807480 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle" (OuterVolumeSpecName: "bundle") pod "b70ff707-8bfb-49fe-8a92-177690d528bb" (UID: "b70ff707-8bfb-49fe-8a92-177690d528bb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:39:38.807763 master-0 kubenswrapper[4409]: I1203 14:39:38.807739 4409 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-bundle\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:38.811239 master-0 kubenswrapper[4409]: I1203 14:39:38.811216 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9" (OuterVolumeSpecName: "kube-api-access-m25h9") pod "b70ff707-8bfb-49fe-8a92-177690d528bb" (UID: "b70ff707-8bfb-49fe-8a92-177690d528bb"). InnerVolumeSpecName "kube-api-access-m25h9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:39:38.909882 master-0 kubenswrapper[4409]: I1203 14:39:38.909725 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25h9\" (UniqueName: \"kubernetes.io/projected/b70ff707-8bfb-49fe-8a92-177690d528bb-kube-api-access-m25h9\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:39.227225 master-0 kubenswrapper[4409]: I1203 14:39:39.226875 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util" (OuterVolumeSpecName: "util") pod "b70ff707-8bfb-49fe-8a92-177690d528bb" (UID: "b70ff707-8bfb-49fe-8a92-177690d528bb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:39:39.321895 master-0 kubenswrapper[4409]: I1203 14:39:39.321391 4409 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b70ff707-8bfb-49fe-8a92-177690d528bb-util\") on node \"master-0\" DevicePath \"\""
Dec 03 14:39:39.349984 master-0 kubenswrapper[4409]: I1203 14:39:39.349689 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925" event={"ID":"b70ff707-8bfb-49fe-8a92-177690d528bb","Type":"ContainerDied","Data":"9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115"}
Dec 03 14:39:39.349984 master-0 kubenswrapper[4409]: I1203 14:39:39.349944 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f4cdfb1df99e918e889e9bd4cd5747471a5da3b50a1290a29fa1130377b4115"
Dec 03 14:39:39.350500 master-0 kubenswrapper[4409]: I1203 14:39:39.350115 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864b8925"
Dec 03 14:39:44.773359 master-0 kubenswrapper[4409]: I1203 14:39:44.773211 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"]
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: E1203 14:39:44.773795 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="pull"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: I1203 14:39:44.773821 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="pull"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: E1203 14:39:44.773858 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="extract"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: I1203 14:39:44.773866 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="extract"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: E1203 14:39:44.773885 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="util"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: I1203 14:39:44.773893 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="util"
Dec 03 14:39:44.774412 master-0 kubenswrapper[4409]: I1203 14:39:44.774143 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70ff707-8bfb-49fe-8a92-177690d528bb" containerName="extract"
Dec 03 14:39:44.775202 master-0 kubenswrapper[4409]: I1203 14:39:44.775170 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:44.807604 master-0 kubenswrapper[4409]: I1203 14:39:44.807522 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"]
Dec 03 14:39:44.866754 master-0 kubenswrapper[4409]: I1203 14:39:44.866653 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzfn\" (UniqueName: \"kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn\") pod \"openstack-operator-controller-operator-7dd5c7bb7c-clg7s\" (UID: \"17c98773-8b12-4c37-bf7e-538b9b8db902\") " pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:44.968624 master-0 kubenswrapper[4409]: I1203 14:39:44.968517 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qzfn\" (UniqueName: \"kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn\") pod \"openstack-operator-controller-operator-7dd5c7bb7c-clg7s\" (UID: \"17c98773-8b12-4c37-bf7e-538b9b8db902\") " pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:44.991651 master-0 kubenswrapper[4409]: I1203 14:39:44.991560 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzfn\" (UniqueName: \"kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn\") pod \"openstack-operator-controller-operator-7dd5c7bb7c-clg7s\" (UID: \"17c98773-8b12-4c37-bf7e-538b9b8db902\") " pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:45.096114 master-0 kubenswrapper[4409]: I1203 14:39:45.096038 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:45.645672 master-0 kubenswrapper[4409]: I1203 14:39:45.645604 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"]
Dec 03 14:39:45.652970 master-0 kubenswrapper[4409]: W1203 14:39:45.652859 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17c98773_8b12_4c37_bf7e_538b9b8db902.slice/crio-2fe295e01fa682f062194b67adf7ce76af971af178345867ed76cffc2fddc2f0 WatchSource:0}: Error finding container 2fe295e01fa682f062194b67adf7ce76af971af178345867ed76cffc2fddc2f0: Status 404 returned error can't find the container with id 2fe295e01fa682f062194b67adf7ce76af971af178345867ed76cffc2fddc2f0
Dec 03 14:39:46.438208 master-0 kubenswrapper[4409]: I1203 14:39:46.437974 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" event={"ID":"17c98773-8b12-4c37-bf7e-538b9b8db902","Type":"ContainerStarted","Data":"2fe295e01fa682f062194b67adf7ce76af971af178345867ed76cffc2fddc2f0"}
Dec 03 14:39:50.480235 master-0 kubenswrapper[4409]: I1203 14:39:50.480158 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" event={"ID":"17c98773-8b12-4c37-bf7e-538b9b8db902","Type":"ContainerStarted","Data":"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa"}
Dec 03 14:39:50.482534 master-0 kubenswrapper[4409]: I1203 14:39:50.480425 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:50.524046 master-0 kubenswrapper[4409]: I1203 14:39:50.523907 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" podStartSLOduration=2.267124286 podStartE2EDuration="6.523879789s" podCreationTimestamp="2025-12-03 14:39:44 +0000 UTC" firstStartedPulling="2025-12-03 14:39:45.656150894 +0000 UTC m=+817.983213400" lastFinishedPulling="2025-12-03 14:39:49.912906377 +0000 UTC m=+822.239968903" observedRunningTime="2025-12-03 14:39:50.509943689 +0000 UTC m=+822.837006195" watchObservedRunningTime="2025-12-03 14:39:50.523879789 +0000 UTC m=+822.850942315"
Dec 03 14:39:55.101511 master-0 kubenswrapper[4409]: I1203 14:39:55.101346 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:39:58.302331 master-0 kubenswrapper[4409]: I1203 14:39:58.302235 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"]
Dec 03 14:39:58.304285 master-0 kubenswrapper[4409]: I1203 14:39:58.304248 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:58.333848 master-0 kubenswrapper[4409]: I1203 14:39:58.333762 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"]
Dec 03 14:39:58.375036 master-0 kubenswrapper[4409]: I1203 14:39:58.372058 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8kt6\" (UniqueName: \"kubernetes.io/projected/5eef034d-0855-457d-a1e4-5a0df89fab08-kube-api-access-s8kt6\") pod \"openstack-operator-controller-operator-7b84d49558-t8dx9\" (UID: \"5eef034d-0855-457d-a1e4-5a0df89fab08\") " pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:58.474285 master-0 kubenswrapper[4409]: I1203 14:39:58.474199 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8kt6\" (UniqueName: \"kubernetes.io/projected/5eef034d-0855-457d-a1e4-5a0df89fab08-kube-api-access-s8kt6\") pod \"openstack-operator-controller-operator-7b84d49558-t8dx9\" (UID: \"5eef034d-0855-457d-a1e4-5a0df89fab08\") " pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:58.492795 master-0 kubenswrapper[4409]: I1203 14:39:58.492739 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8kt6\" (UniqueName: \"kubernetes.io/projected/5eef034d-0855-457d-a1e4-5a0df89fab08-kube-api-access-s8kt6\") pod \"openstack-operator-controller-operator-7b84d49558-t8dx9\" (UID: \"5eef034d-0855-457d-a1e4-5a0df89fab08\") " pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:58.621972 master-0 kubenswrapper[4409]: I1203 14:39:58.621885 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:59.140643 master-0 kubenswrapper[4409]: I1203 14:39:59.136773 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"]
Dec 03 14:39:59.572202 master-0 kubenswrapper[4409]: I1203 14:39:59.572143 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9" event={"ID":"5eef034d-0855-457d-a1e4-5a0df89fab08","Type":"ContainerStarted","Data":"679a501b45c6993d8116f1d9cb789be98d3b506e241b433b0fca2783f28365ac"}
Dec 03 14:39:59.572776 master-0 kubenswrapper[4409]: I1203 14:39:59.572212 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9" event={"ID":"5eef034d-0855-457d-a1e4-5a0df89fab08","Type":"ContainerStarted","Data":"fad3934eb2c5c0c9d394f64d9ab50aa7ea5a2ef7eb595aa509f574cc931e2921"}
Dec 03 14:39:59.572776 master-0 kubenswrapper[4409]: I1203 14:39:59.572261 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:39:59.605243 master-0 kubenswrapper[4409]: I1203 14:39:59.605112 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9" podStartSLOduration=1.605035282 podStartE2EDuration="1.605035282s" podCreationTimestamp="2025-12-03 14:39:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:39:59.603582389 +0000 UTC m=+831.930644905" watchObservedRunningTime="2025-12-03 14:39:59.605035282 +0000 UTC m=+831.932097798"
Dec 03 14:40:08.626034 master-0 kubenswrapper[4409]: I1203 14:40:08.625887 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b84d49558-t8dx9"
Dec 03 14:40:08.723196 master-0 kubenswrapper[4409]: I1203 14:40:08.722918 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"]
Dec 03 14:40:08.723448 master-0 kubenswrapper[4409]: I1203 14:40:08.723303 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" podUID="17c98773-8b12-4c37-bf7e-538b9b8db902" containerName="operator" containerID="cri-o://aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa" gracePeriod=10
Dec 03 14:40:09.178137 master-0 kubenswrapper[4409]: I1203 14:40:09.178096 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:40:09.216663 master-0 kubenswrapper[4409]: I1203 14:40:09.216596 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzfn\" (UniqueName: \"kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn\") pod \"17c98773-8b12-4c37-bf7e-538b9b8db902\" (UID: \"17c98773-8b12-4c37-bf7e-538b9b8db902\") "
Dec 03 14:40:09.223120 master-0 kubenswrapper[4409]: I1203 14:40:09.223024 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn" (OuterVolumeSpecName: "kube-api-access-9qzfn") pod "17c98773-8b12-4c37-bf7e-538b9b8db902" (UID: "17c98773-8b12-4c37-bf7e-538b9b8db902"). InnerVolumeSpecName "kube-api-access-9qzfn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:40:09.319915 master-0 kubenswrapper[4409]: I1203 14:40:09.319824 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzfn\" (UniqueName: \"kubernetes.io/projected/17c98773-8b12-4c37-bf7e-538b9b8db902-kube-api-access-9qzfn\") on node \"master-0\" DevicePath \"\""
Dec 03 14:40:09.652244 master-0 kubenswrapper[4409]: I1203 14:40:09.652182 4409 generic.go:334] "Generic (PLEG): container finished" podID="17c98773-8b12-4c37-bf7e-538b9b8db902" containerID="aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa" exitCode=0
Dec 03 14:40:09.652244 master-0 kubenswrapper[4409]: I1203 14:40:09.652229 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"
Dec 03 14:40:09.652972 master-0 kubenswrapper[4409]: I1203 14:40:09.652269 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" event={"ID":"17c98773-8b12-4c37-bf7e-538b9b8db902","Type":"ContainerDied","Data":"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa"}
Dec 03 14:40:09.652972 master-0 kubenswrapper[4409]: I1203 14:40:09.652363 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s" event={"ID":"17c98773-8b12-4c37-bf7e-538b9b8db902","Type":"ContainerDied","Data":"2fe295e01fa682f062194b67adf7ce76af971af178345867ed76cffc2fddc2f0"}
Dec 03 14:40:09.652972 master-0 kubenswrapper[4409]: I1203 14:40:09.652515 4409 scope.go:117] "RemoveContainer" containerID="aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa"
Dec 03 14:40:09.681249 master-0 kubenswrapper[4409]: I1203 14:40:09.674944 4409 scope.go:117] "RemoveContainer" containerID="aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa"
Dec 03 14:40:09.688287 master-0
kubenswrapper[4409]: E1203 14:40:09.682235 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa\": container with ID starting with aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa not found: ID does not exist" containerID="aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa" Dec 03 14:40:09.688287 master-0 kubenswrapper[4409]: I1203 14:40:09.682318 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa"} err="failed to get container status \"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa\": rpc error: code = NotFound desc = could not find container \"aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa\": container with ID starting with aa414b4740b961eebc653f751725cc7689160413fb3d4993fc61589bf03e15fa not found: ID does not exist" Dec 03 14:40:09.728027 master-0 kubenswrapper[4409]: I1203 14:40:09.725148 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"] Dec 03 14:40:09.758428 master-0 kubenswrapper[4409]: I1203 14:40:09.758341 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-clg7s"] Dec 03 14:40:09.828328 master-0 kubenswrapper[4409]: I1203 14:40:09.828268 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c98773-8b12-4c37-bf7e-538b9b8db902" path="/var/lib/kubelet/pods/17c98773-8b12-4c37-bf7e-538b9b8db902/volumes" Dec 03 14:41:11.234606 master-0 kubenswrapper[4409]: I1203 14:41:11.234488 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f"] Dec 03 14:41:11.242137 master-0 kubenswrapper[4409]: 
E1203 14:41:11.242084 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c98773-8b12-4c37-bf7e-538b9b8db902" containerName="operator" Dec 03 14:41:11.242365 master-0 kubenswrapper[4409]: I1203 14:41:11.242151 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c98773-8b12-4c37-bf7e-538b9b8db902" containerName="operator" Dec 03 14:41:11.242523 master-0 kubenswrapper[4409]: I1203 14:41:11.242502 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c98773-8b12-4c37-bf7e-538b9b8db902" containerName="operator" Dec 03 14:41:11.244416 master-0 kubenswrapper[4409]: I1203 14:41:11.243847 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" Dec 03 14:41:11.266597 master-0 kubenswrapper[4409]: I1203 14:41:11.266530 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4"] Dec 03 14:41:11.268382 master-0 kubenswrapper[4409]: I1203 14:41:11.268349 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:11.291165 master-0 kubenswrapper[4409]: I1203 14:41:11.291108 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f"] Dec 03 14:41:11.294246 master-0 kubenswrapper[4409]: I1203 14:41:11.294124 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4"] Dec 03 14:41:11.321120 master-0 kubenswrapper[4409]: I1203 14:41:11.319535 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw"] Dec 03 14:41:11.338029 master-0 kubenswrapper[4409]: I1203 14:41:11.332923 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:11.338306 master-0 kubenswrapper[4409]: I1203 14:41:11.338152 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw"] Dec 03 14:41:11.351020 master-0 kubenswrapper[4409]: I1203 14:41:11.350934 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mww\" (UniqueName: \"kubernetes.io/projected/926503e0-c896-4b66-a633-80c3f986adc2-kube-api-access-p5mww\") pod \"cinder-operator-controller-manager-f8856dd79-rbqv4\" (UID: \"926503e0-c896-4b66-a633-80c3f986adc2\") " pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:11.351316 master-0 kubenswrapper[4409]: I1203 14:41:11.351035 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmtg\" (UniqueName: \"kubernetes.io/projected/e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b-kube-api-access-vsmtg\") pod \"barbican-operator-controller-manager-5cd89994b5-2gn4f\" (UID: \"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b\") " pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" Dec 03 14:41:11.382971 master-0 kubenswrapper[4409]: I1203 14:41:11.382916 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9"] Dec 03 14:41:11.384765 master-0 kubenswrapper[4409]: I1203 14:41:11.384731 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:11.465699 master-0 kubenswrapper[4409]: I1203 14:41:11.456747 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mww\" (UniqueName: \"kubernetes.io/projected/926503e0-c896-4b66-a633-80c3f986adc2-kube-api-access-p5mww\") pod \"cinder-operator-controller-manager-f8856dd79-rbqv4\" (UID: \"926503e0-c896-4b66-a633-80c3f986adc2\") " pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:11.465699 master-0 kubenswrapper[4409]: I1203 14:41:11.456917 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmtg\" (UniqueName: \"kubernetes.io/projected/e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b-kube-api-access-vsmtg\") pod \"barbican-operator-controller-manager-5cd89994b5-2gn4f\" (UID: \"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b\") " pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" Dec 03 14:41:11.465699 master-0 kubenswrapper[4409]: I1203 14:41:11.456960 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/89240d4a-2ca3-4f9a-9817-52023d7e2880-kube-api-access-bx9mj\") pod \"designate-operator-controller-manager-84bc9f68f5-s8bzw\" (UID: \"89240d4a-2ca3-4f9a-9817-52023d7e2880\") " pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:11.465699 master-0 kubenswrapper[4409]: I1203 14:41:11.456995 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xjt\" (UniqueName: \"kubernetes.io/projected/b1560eec-1871-4d57-ac22-c8f32d9ab53b-kube-api-access-d5xjt\") pod \"glance-operator-controller-manager-78cd4f7769-lmlf9\" (UID: \"b1560eec-1871-4d57-ac22-c8f32d9ab53b\") " 
pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:11.519819 master-0 kubenswrapper[4409]: I1203 14:41:11.519679 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9"] Dec 03 14:41:11.533035 master-0 kubenswrapper[4409]: I1203 14:41:11.525288 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mww\" (UniqueName: \"kubernetes.io/projected/926503e0-c896-4b66-a633-80c3f986adc2-kube-api-access-p5mww\") pod \"cinder-operator-controller-manager-f8856dd79-rbqv4\" (UID: \"926503e0-c896-4b66-a633-80c3f986adc2\") " pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:11.533035 master-0 kubenswrapper[4409]: I1203 14:41:11.528860 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmtg\" (UniqueName: \"kubernetes.io/projected/e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b-kube-api-access-vsmtg\") pod \"barbican-operator-controller-manager-5cd89994b5-2gn4f\" (UID: \"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b\") " pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" Dec 03 14:41:11.569289 master-0 kubenswrapper[4409]: I1203 14:41:11.569209 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/89240d4a-2ca3-4f9a-9817-52023d7e2880-kube-api-access-bx9mj\") pod \"designate-operator-controller-manager-84bc9f68f5-s8bzw\" (UID: \"89240d4a-2ca3-4f9a-9817-52023d7e2880\") " pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:11.569289 master-0 kubenswrapper[4409]: I1203 14:41:11.569295 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5xjt\" (UniqueName: \"kubernetes.io/projected/b1560eec-1871-4d57-ac22-c8f32d9ab53b-kube-api-access-d5xjt\") pod 
\"glance-operator-controller-manager-78cd4f7769-lmlf9\" (UID: \"b1560eec-1871-4d57-ac22-c8f32d9ab53b\") " pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:11.582406 master-0 kubenswrapper[4409]: I1203 14:41:11.582337 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft"] Dec 03 14:41:11.588026 master-0 kubenswrapper[4409]: I1203 14:41:11.584288 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:11.588026 master-0 kubenswrapper[4409]: I1203 14:41:11.584424 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" Dec 03 14:41:11.625029 master-0 kubenswrapper[4409]: I1203 14:41:11.618312 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5xjt\" (UniqueName: \"kubernetes.io/projected/b1560eec-1871-4d57-ac22-c8f32d9ab53b-kube-api-access-d5xjt\") pod \"glance-operator-controller-manager-78cd4f7769-lmlf9\" (UID: \"b1560eec-1871-4d57-ac22-c8f32d9ab53b\") " pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:11.647863 master-0 kubenswrapper[4409]: I1203 14:41:11.645084 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft"] Dec 03 14:41:11.647863 master-0 kubenswrapper[4409]: I1203 14:41:11.645661 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:11.662134 master-0 kubenswrapper[4409]: I1203 14:41:11.658200 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx9mj\" (UniqueName: \"kubernetes.io/projected/89240d4a-2ca3-4f9a-9817-52023d7e2880-kube-api-access-bx9mj\") pod \"designate-operator-controller-manager-84bc9f68f5-s8bzw\" (UID: \"89240d4a-2ca3-4f9a-9817-52023d7e2880\") " pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:11.675286 master-0 kubenswrapper[4409]: I1203 14:41:11.675220 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp"] Dec 03 14:41:11.675643 master-0 kubenswrapper[4409]: I1203 14:41:11.675447 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctm8m\" (UniqueName: \"kubernetes.io/projected/9c1d3765-48c3-4f50-9b63-827eeefde7db-kube-api-access-ctm8m\") pod \"heat-operator-controller-manager-7fd96594c7-24pft\" (UID: \"9c1d3765-48c3-4f50-9b63-827eeefde7db\") " pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:11.716462 master-0 kubenswrapper[4409]: I1203 14:41:11.714281 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:11.716462 master-0 kubenswrapper[4409]: I1203 14:41:11.715242 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" Dec 03 14:41:11.785945 master-0 kubenswrapper[4409]: I1203 14:41:11.785702 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"] Dec 03 14:41:11.836993 master-0 kubenswrapper[4409]: I1203 14:41:11.832554 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctm8m\" (UniqueName: \"kubernetes.io/projected/9c1d3765-48c3-4f50-9b63-827eeefde7db-kube-api-access-ctm8m\") pod \"heat-operator-controller-manager-7fd96594c7-24pft\" (UID: \"9c1d3765-48c3-4f50-9b63-827eeefde7db\") " pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:11.836993 master-0 kubenswrapper[4409]: I1203 14:41:11.832727 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbz6t\" (UniqueName: \"kubernetes.io/projected/a3d007f9-8a26-45da-8913-b6de85ecacbd-kube-api-access-mbz6t\") pod \"horizon-operator-controller-manager-f6cc97788-v9zzp\" (UID: \"a3d007f9-8a26-45da-8913-b6de85ecacbd\") " pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" Dec 03 14:41:11.861980 master-0 kubenswrapper[4409]: I1203 14:41:11.840110 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:11.861980 master-0 kubenswrapper[4409]: I1203 14:41:11.842959 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:11.861980 master-0 kubenswrapper[4409]: I1203 14:41:11.855568 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 03 14:41:11.894762 master-0 kubenswrapper[4409]: I1203 14:41:11.878277 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctm8m\" (UniqueName: \"kubernetes.io/projected/9c1d3765-48c3-4f50-9b63-827eeefde7db-kube-api-access-ctm8m\") pod \"heat-operator-controller-manager-7fd96594c7-24pft\" (UID: \"9c1d3765-48c3-4f50-9b63-827eeefde7db\") " pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:11.894762 master-0 kubenswrapper[4409]: I1203 14:41:11.879655 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp"] Dec 03 14:41:11.894762 master-0 kubenswrapper[4409]: I1203 14:41:11.879718 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"] Dec 03 14:41:11.894762 master-0 kubenswrapper[4409]: I1203 14:41:11.884393 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx"] Dec 03 14:41:11.894762 master-0 kubenswrapper[4409]: I1203 14:41:11.893230 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" Dec 03 14:41:11.936855 master-0 kubenswrapper[4409]: I1203 14:41:11.936795 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx"] Dec 03 14:41:11.944272 master-0 kubenswrapper[4409]: I1203 14:41:11.944226 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bzpt\" (UniqueName: \"kubernetes.io/projected/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-kube-api-access-5bzpt\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:11.944500 master-0 kubenswrapper[4409]: I1203 14:41:11.944309 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:11.944500 master-0 kubenswrapper[4409]: I1203 14:41:11.944365 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbz6t\" (UniqueName: \"kubernetes.io/projected/a3d007f9-8a26-45da-8913-b6de85ecacbd-kube-api-access-mbz6t\") pod \"horizon-operator-controller-manager-f6cc97788-v9zzp\" (UID: \"a3d007f9-8a26-45da-8913-b6de85ecacbd\") " pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" Dec 03 14:41:11.960089 master-0 kubenswrapper[4409]: I1203 14:41:11.955839 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq"] Dec 03 14:41:11.960089 master-0 kubenswrapper[4409]: I1203 
14:41:11.959635 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" Dec 03 14:41:11.968951 master-0 kubenswrapper[4409]: I1203 14:41:11.967977 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq"] Dec 03 14:41:11.982086 master-0 kubenswrapper[4409]: I1203 14:41:11.972093 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbz6t\" (UniqueName: \"kubernetes.io/projected/a3d007f9-8a26-45da-8913-b6de85ecacbd-kube-api-access-mbz6t\") pod \"horizon-operator-controller-manager-f6cc97788-v9zzp\" (UID: \"a3d007f9-8a26-45da-8913-b6de85ecacbd\") " pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" Dec 03 14:41:11.982086 master-0 kubenswrapper[4409]: I1203 14:41:11.979129 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr"] Dec 03 14:41:11.982086 master-0 kubenswrapper[4409]: I1203 14:41:11.981537 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:12.017260 master-0 kubenswrapper[4409]: I1203 14:41:12.016076 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn"] Dec 03 14:41:12.024793 master-0 kubenswrapper[4409]: I1203 14:41:12.018425 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:12.028141 master-0 kubenswrapper[4409]: I1203 14:41:12.025069 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr"] Dec 03 14:41:12.042389 master-0 kubenswrapper[4409]: I1203 14:41:12.041120 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd"] Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: I1203 14:41:12.042667 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: I1203 14:41:12.045425 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzg9\" (UniqueName: \"kubernetes.io/projected/e7b297c8-bfae-4d29-9698-9850cefb3c6a-kube-api-access-tzzg9\") pod \"keystone-operator-controller-manager-58b8dcc5fb-crqhq\" (UID: \"e7b297c8-bfae-4d29-9698-9850cefb3c6a\") " pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: I1203 14:41:12.045521 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bzpt\" (UniqueName: \"kubernetes.io/projected/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-kube-api-access-5bzpt\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: I1203 14:41:12.045581 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod 
\"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: I1203 14:41:12.045617 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q8xf\" (UniqueName: \"kubernetes.io/projected/48b8028a-5751-4806-bb6f-9ba67ff58d60-kube-api-access-8q8xf\") pod \"ironic-operator-controller-manager-7c9bfd6967-s2sbx\" (UID: \"48b8028a-5751-4806-bb6f-9ba67ff58d60\") " pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: E1203 14:41:12.045965 4409 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:12.050180 master-0 kubenswrapper[4409]: E1203 14:41:12.046057 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert podName:1cc58fc3-ce7e-459f-a9c9-27bf4e733e48 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:12.545999482 +0000 UTC m=+904.873061988 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert") pod "infra-operator-controller-manager-7d9c9d7fd8-f4ttw" (UID: "1cc58fc3-ce7e-459f-a9c9-27bf4e733e48") : secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:12.068144 master-0 kubenswrapper[4409]: I1203 14:41:12.068079 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx"] Dec 03 14:41:12.075328 master-0 kubenswrapper[4409]: I1203 14:41:12.075275 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn"] Dec 03 14:41:12.075704 master-0 kubenswrapper[4409]: I1203 14:41:12.075656 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" Dec 03 14:41:12.086914 master-0 kubenswrapper[4409]: I1203 14:41:12.086651 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx"] Dec 03 14:41:12.096259 master-0 kubenswrapper[4409]: I1203 14:41:12.096201 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd"] Dec 03 14:41:12.099642 master-0 kubenswrapper[4409]: I1203 14:41:12.099612 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bzpt\" (UniqueName: \"kubernetes.io/projected/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-kube-api-access-5bzpt\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:12.110477 master-0 kubenswrapper[4409]: I1203 14:41:12.110406 4409 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9"] Dec 03 14:41:12.113169 master-0 kubenswrapper[4409]: I1203 14:41:12.113131 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" Dec 03 14:41:12.139074 master-0 kubenswrapper[4409]: I1203 14:41:12.138833 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:12.149125 master-0 kubenswrapper[4409]: I1203 14:41:12.149087 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q8xf\" (UniqueName: \"kubernetes.io/projected/48b8028a-5751-4806-bb6f-9ba67ff58d60-kube-api-access-8q8xf\") pod \"ironic-operator-controller-manager-7c9bfd6967-s2sbx\" (UID: \"48b8028a-5751-4806-bb6f-9ba67ff58d60\") " pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" Dec 03 14:41:12.149364 master-0 kubenswrapper[4409]: I1203 14:41:12.149146 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vzjw\" (UniqueName: \"kubernetes.io/projected/6d1f8e16-846e-455b-b7e6-670be5612c5c-kube-api-access-2vzjw\") pod \"manila-operator-controller-manager-56f9fbf74b-p2kpr\" (UID: \"6d1f8e16-846e-455b-b7e6-670be5612c5c\") " pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:12.149364 master-0 kubenswrapper[4409]: I1203 14:41:12.149195 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzg9\" (UniqueName: \"kubernetes.io/projected/e7b297c8-bfae-4d29-9698-9850cefb3c6a-kube-api-access-tzzg9\") pod \"keystone-operator-controller-manager-58b8dcc5fb-crqhq\" (UID: \"e7b297c8-bfae-4d29-9698-9850cefb3c6a\") " pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" Dec 03 14:41:12.149364 
master-0 kubenswrapper[4409]: I1203 14:41:12.149279 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjbp\" (UniqueName: \"kubernetes.io/projected/0ab1be43-6804-4cef-9727-2775f768c351-kube-api-access-ffjbp\") pod \"neutron-operator-controller-manager-7cdd6b54fb-d2kpd\" (UID: \"0ab1be43-6804-4cef-9727-2775f768c351\") " pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:12.149364 master-0 kubenswrapper[4409]: I1203 14:41:12.149348 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcfp\" (UniqueName: \"kubernetes.io/projected/40f3ac32-c13b-4d65-96ca-08a77e2bd66a-kube-api-access-pwcfp\") pod \"mariadb-operator-controller-manager-647d75769b-l99kn\" (UID: \"40f3ac32-c13b-4d65-96ca-08a77e2bd66a\") " pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:12.154527 master-0 kubenswrapper[4409]: I1203 14:41:12.154466 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" Dec 03 14:41:12.163114 master-0 kubenswrapper[4409]: I1203 14:41:12.163044 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9"] Dec 03 14:41:12.168304 master-0 kubenswrapper[4409]: I1203 14:41:12.168216 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q8xf\" (UniqueName: \"kubernetes.io/projected/48b8028a-5751-4806-bb6f-9ba67ff58d60-kube-api-access-8q8xf\") pod \"ironic-operator-controller-manager-7c9bfd6967-s2sbx\" (UID: \"48b8028a-5751-4806-bb6f-9ba67ff58d60\") " pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" Dec 03 14:41:12.175341 master-0 kubenswrapper[4409]: I1203 14:41:12.175289 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"] Dec 03 14:41:12.178002 master-0 kubenswrapper[4409]: I1203 14:41:12.177939 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.179453 master-0 kubenswrapper[4409]: I1203 14:41:12.179425 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 03 14:41:12.183676 master-0 kubenswrapper[4409]: I1203 14:41:12.183631 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl"] Dec 03 14:41:12.185726 master-0 kubenswrapper[4409]: I1203 14:41:12.185683 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:12.186483 master-0 kubenswrapper[4409]: I1203 14:41:12.186352 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzg9\" (UniqueName: \"kubernetes.io/projected/e7b297c8-bfae-4d29-9698-9850cefb3c6a-kube-api-access-tzzg9\") pod \"keystone-operator-controller-manager-58b8dcc5fb-crqhq\" (UID: \"e7b297c8-bfae-4d29-9698-9850cefb3c6a\") " pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" Dec 03 14:41:12.192827 master-0 kubenswrapper[4409]: I1203 14:41:12.192167 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl"] Dec 03 14:41:12.202168 master-0 kubenswrapper[4409]: I1203 14:41:12.202121 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"] Dec 03 14:41:12.215828 master-0 kubenswrapper[4409]: I1203 14:41:12.215734 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7"] Dec 03 14:41:12.218185 master-0 kubenswrapper[4409]: I1203 14:41:12.218141 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:12.251866 master-0 kubenswrapper[4409]: I1203 14:41:12.251798 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djfm\" (UniqueName: \"kubernetes.io/projected/dae6346f-01f4-4206-a011-e2cb63760061-kube-api-access-7djfm\") pod \"nova-operator-controller-manager-865fc86d5b-pk8dx\" (UID: \"dae6346f-01f4-4206-a011-e2cb63760061\") " pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" Dec 03 14:41:12.252482 master-0 kubenswrapper[4409]: I1203 14:41:12.251911 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjbp\" (UniqueName: \"kubernetes.io/projected/0ab1be43-6804-4cef-9727-2775f768c351-kube-api-access-ffjbp\") pod \"neutron-operator-controller-manager-7cdd6b54fb-d2kpd\" (UID: \"0ab1be43-6804-4cef-9727-2775f768c351\") " pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:12.252482 master-0 kubenswrapper[4409]: I1203 14:41:12.251975 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7"] Dec 03 14:41:12.252482 master-0 kubenswrapper[4409]: I1203 14:41:12.252046 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwcfp\" (UniqueName: \"kubernetes.io/projected/40f3ac32-c13b-4d65-96ca-08a77e2bd66a-kube-api-access-pwcfp\") pod \"mariadb-operator-controller-manager-647d75769b-l99kn\" (UID: \"40f3ac32-c13b-4d65-96ca-08a77e2bd66a\") " pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:12.252482 master-0 kubenswrapper[4409]: I1203 14:41:12.252137 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vzjw\" (UniqueName: 
\"kubernetes.io/projected/6d1f8e16-846e-455b-b7e6-670be5612c5c-kube-api-access-2vzjw\") pod \"manila-operator-controller-manager-56f9fbf74b-p2kpr\" (UID: \"6d1f8e16-846e-455b-b7e6-670be5612c5c\") " pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:12.252482 master-0 kubenswrapper[4409]: I1203 14:41:12.252203 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfdbz\" (UniqueName: \"kubernetes.io/projected/5a439dbb-6458-4800-aade-ea490b402662-kube-api-access-vfdbz\") pod \"octavia-operator-controller-manager-845b79dc4f-cj7z9\" (UID: \"5a439dbb-6458-4800-aade-ea490b402662\") " pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" Dec 03 14:41:12.261878 master-0 kubenswrapper[4409]: I1203 14:41:12.261622 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-696b999796-zd6d2"] Dec 03 14:41:12.264444 master-0 kubenswrapper[4409]: I1203 14:41:12.264265 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" Dec 03 14:41:12.277098 master-0 kubenswrapper[4409]: I1203 14:41:12.276033 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vzjw\" (UniqueName: \"kubernetes.io/projected/6d1f8e16-846e-455b-b7e6-670be5612c5c-kube-api-access-2vzjw\") pod \"manila-operator-controller-manager-56f9fbf74b-p2kpr\" (UID: \"6d1f8e16-846e-455b-b7e6-670be5612c5c\") " pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:12.280965 master-0 kubenswrapper[4409]: I1203 14:41:12.279790 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjbp\" (UniqueName: \"kubernetes.io/projected/0ab1be43-6804-4cef-9727-2775f768c351-kube-api-access-ffjbp\") pod \"neutron-operator-controller-manager-7cdd6b54fb-d2kpd\" (UID: \"0ab1be43-6804-4cef-9727-2775f768c351\") " pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:12.280965 master-0 kubenswrapper[4409]: I1203 14:41:12.280412 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" Dec 03 14:41:12.280965 master-0 kubenswrapper[4409]: I1203 14:41:12.280603 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwcfp\" (UniqueName: \"kubernetes.io/projected/40f3ac32-c13b-4d65-96ca-08a77e2bd66a-kube-api-access-pwcfp\") pod \"mariadb-operator-controller-manager-647d75769b-l99kn\" (UID: \"40f3ac32-c13b-4d65-96ca-08a77e2bd66a\") " pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:12.283793 master-0 kubenswrapper[4409]: I1203 14:41:12.283761 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" Dec 03 14:41:12.315668 master-0 kubenswrapper[4409]: I1203 14:41:12.315185 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4"] Dec 03 14:41:12.317791 master-0 kubenswrapper[4409]: I1203 14:41:12.317757 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:12.326752 master-0 kubenswrapper[4409]: I1203 14:41:12.326410 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-696b999796-zd6d2"] Dec 03 14:41:12.344245 master-0 kubenswrapper[4409]: I1203 14:41:12.341935 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 14:41:12.354478 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 14:41:12.354617 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfdbz\" (UniqueName: \"kubernetes.io/projected/5a439dbb-6458-4800-aade-ea490b402662-kube-api-access-vfdbz\") pod \"octavia-operator-controller-manager-845b79dc4f-cj7z9\" (UID: \"5a439dbb-6458-4800-aade-ea490b402662\") " pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 
14:41:12.354657 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztt4x\" (UniqueName: \"kubernetes.io/projected/1176c675-89df-4778-94ed-14f77f9cc668-kube-api-access-ztt4x\") pod \"placement-operator-controller-manager-6b64f6f645-zgkn7\" (UID: \"1176c675-89df-4778-94ed-14f77f9cc668\") " pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 14:41:12.354678 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q7n4\" (UniqueName: \"kubernetes.io/projected/bc824ed3-cfbf-4219-816e-f4a7539359b0-kube-api-access-2q7n4\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 14:41:12.354701 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djfm\" (UniqueName: \"kubernetes.io/projected/dae6346f-01f4-4206-a011-e2cb63760061-kube-api-access-7djfm\") pod \"nova-operator-controller-manager-865fc86d5b-pk8dx\" (UID: \"dae6346f-01f4-4206-a011-e2cb63760061\") " pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" Dec 03 14:41:12.355019 master-0 kubenswrapper[4409]: I1203 14:41:12.354732 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/3c7ef074-1a45-491c-9872-76f6a6315362-kube-api-access-pjbr2\") pod \"ovn-operator-controller-manager-647f96877-kf2cl\" (UID: \"3c7ef074-1a45-491c-9872-76f6a6315362\") " pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:12.357722 master-0 kubenswrapper[4409]: I1203 14:41:12.357676 4409 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4"] Dec 03 14:41:12.380251 master-0 kubenswrapper[4409]: I1203 14:41:12.380195 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr"] Dec 03 14:41:12.382279 master-0 kubenswrapper[4409]: I1203 14:41:12.382224 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djfm\" (UniqueName: \"kubernetes.io/projected/dae6346f-01f4-4206-a011-e2cb63760061-kube-api-access-7djfm\") pod \"nova-operator-controller-manager-865fc86d5b-pk8dx\" (UID: \"dae6346f-01f4-4206-a011-e2cb63760061\") " pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" Dec 03 14:41:12.383811 master-0 kubenswrapper[4409]: I1203 14:41:12.383778 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:12.390485 master-0 kubenswrapper[4409]: I1203 14:41:12.390427 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfdbz\" (UniqueName: \"kubernetes.io/projected/5a439dbb-6458-4800-aade-ea490b402662-kube-api-access-vfdbz\") pod \"octavia-operator-controller-manager-845b79dc4f-cj7z9\" (UID: \"5a439dbb-6458-4800-aade-ea490b402662\") " pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" Dec 03 14:41:12.398099 master-0 kubenswrapper[4409]: I1203 14:41:12.398061 4409 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 03 14:41:12.420068 master-0 kubenswrapper[4409]: I1203 14:41:12.410074 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr"] Dec 03 14:41:12.431538 master-0 kubenswrapper[4409]: I1203 14:41:12.431488 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:12.440504 master-0 kubenswrapper[4409]: I1203 14:41:12.440413 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp"] Dec 03 14:41:12.443183 master-0 kubenswrapper[4409]: I1203 14:41:12.443106 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:12.454853 master-0 kubenswrapper[4409]: I1203 14:41:12.454781 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp"] Dec 03 14:41:12.457216 master-0 kubenswrapper[4409]: I1203 14:41:12.457154 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztt4x\" (UniqueName: \"kubernetes.io/projected/1176c675-89df-4778-94ed-14f77f9cc668-kube-api-access-ztt4x\") pod \"placement-operator-controller-manager-6b64f6f645-zgkn7\" (UID: \"1176c675-89df-4778-94ed-14f77f9cc668\") " pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:12.457322 master-0 kubenswrapper[4409]: I1203 14:41:12.457257 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q7n4\" (UniqueName: \"kubernetes.io/projected/bc824ed3-cfbf-4219-816e-f4a7539359b0-kube-api-access-2q7n4\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.457322 master-0 kubenswrapper[4409]: I1203 14:41:12.457314 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzx4\" (UniqueName: 
\"kubernetes.io/projected/e1c992ff-f2a7-4938-977e-fd4ab03bde82-kube-api-access-crzx4\") pod \"swift-operator-controller-manager-696b999796-zd6d2\" (UID: \"e1c992ff-f2a7-4938-977e-fd4ab03bde82\") " pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" Dec 03 14:41:12.457571 master-0 kubenswrapper[4409]: I1203 14:41:12.457539 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/3c7ef074-1a45-491c-9872-76f6a6315362-kube-api-access-pjbr2\") pod \"ovn-operator-controller-manager-647f96877-kf2cl\" (UID: \"3c7ef074-1a45-491c-9872-76f6a6315362\") " pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:12.457671 master-0 kubenswrapper[4409]: I1203 14:41:12.457651 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.457726 master-0 kubenswrapper[4409]: I1203 14:41:12.457707 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qmt\" (UniqueName: \"kubernetes.io/projected/5756531c-95ec-4fb0-a406-f0d3e64d8795-kube-api-access-69qmt\") pod \"telemetry-operator-controller-manager-7b5867bfc7-b7fd4\" (UID: \"5756531c-95ec-4fb0-a406-f0d3e64d8795\") " pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:12.457961 master-0 kubenswrapper[4409]: I1203 14:41:12.457914 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:12.459480 master-0 kubenswrapper[4409]: E1203 14:41:12.458774 4409 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:12.459480 master-0 kubenswrapper[4409]: E1203 14:41:12.458830 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert podName:bc824ed3-cfbf-4219-816e-f4a7539359b0 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:12.958812873 +0000 UTC m=+905.285875369 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert") pod "openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" (UID: "bc824ed3-cfbf-4219-816e-f4a7539359b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:12.461892 master-0 kubenswrapper[4409]: I1203 14:41:12.461837 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" event={"ID":"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b","Type":"ContainerStarted","Data":"f0a3cb576aec16689e5af30554d2b3196800269d3e7ca524d6b1e74390973091"} Dec 03 14:41:12.480312 master-0 kubenswrapper[4409]: I1203 14:41:12.480267 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztt4x\" (UniqueName: \"kubernetes.io/projected/1176c675-89df-4778-94ed-14f77f9cc668-kube-api-access-ztt4x\") pod \"placement-operator-controller-manager-6b64f6f645-zgkn7\" (UID: \"1176c675-89df-4778-94ed-14f77f9cc668\") " pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:12.481729 master-0 kubenswrapper[4409]: I1203 14:41:12.481634 4409 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2q7n4\" (UniqueName: \"kubernetes.io/projected/bc824ed3-cfbf-4219-816e-f4a7539359b0-kube-api-access-2q7n4\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.489363 master-0 kubenswrapper[4409]: I1203 14:41:12.485245 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbr2\" (UniqueName: \"kubernetes.io/projected/3c7ef074-1a45-491c-9872-76f6a6315362-kube-api-access-pjbr2\") pod \"ovn-operator-controller-manager-647f96877-kf2cl\" (UID: \"3c7ef074-1a45-491c-9872-76f6a6315362\") " pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:12.489363 master-0 kubenswrapper[4409]: I1203 14:41:12.488694 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" Dec 03 14:41:12.508812 master-0 kubenswrapper[4409]: I1203 14:41:12.508736 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" Dec 03 14:41:12.539169 master-0 kubenswrapper[4409]: I1203 14:41:12.539089 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"] Dec 03 14:41:12.542337 master-0 kubenswrapper[4409]: I1203 14:41:12.542145 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.553215 master-0 kubenswrapper[4409]: I1203 14:41:12.552345 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 03 14:41:12.553215 master-0 kubenswrapper[4409]: I1203 14:41:12.552582 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 03 14:41:12.563520 master-0 kubenswrapper[4409]: I1203 14:41:12.563468 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzx4\" (UniqueName: \"kubernetes.io/projected/e1c992ff-f2a7-4938-977e-fd4ab03bde82-kube-api-access-crzx4\") pod \"swift-operator-controller-manager-696b999796-zd6d2\" (UID: \"e1c992ff-f2a7-4938-977e-fd4ab03bde82\") " pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" Dec 03 14:41:12.563720 master-0 kubenswrapper[4409]: I1203 14:41:12.563677 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qmt\" (UniqueName: \"kubernetes.io/projected/5756531c-95ec-4fb0-a406-f0d3e64d8795-kube-api-access-69qmt\") pod \"telemetry-operator-controller-manager-7b5867bfc7-b7fd4\" (UID: \"5756531c-95ec-4fb0-a406-f0d3e64d8795\") " pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:12.563771 master-0 kubenswrapper[4409]: I1203 14:41:12.563729 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:12.563771 master-0 kubenswrapper[4409]: I1203 14:41:12.563763 4409 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42ch5\" (UniqueName: \"kubernetes.io/projected/5d773bda-73fe-4e57-aed9-fc593e93a67f-kube-api-access-42ch5\") pod \"watcher-operator-controller-manager-6b9b669fdb-tvkgp\" (UID: \"5d773bda-73fe-4e57-aed9-fc593e93a67f\") " pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:12.563844 master-0 kubenswrapper[4409]: I1203 14:41:12.563814 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxsl\" (UniqueName: \"kubernetes.io/projected/7bd1f9d6-8524-4eea-beb1-a4cedf159998-kube-api-access-fpxsl\") pod \"test-operator-controller-manager-57dfcdd5b8-vmnjr\" (UID: \"7bd1f9d6-8524-4eea-beb1-a4cedf159998\") " pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:12.566819 master-0 kubenswrapper[4409]: E1203 14:41:12.566733 4409 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:12.566890 master-0 kubenswrapper[4409]: E1203 14:41:12.566868 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert podName:1cc58fc3-ce7e-459f-a9c9-27bf4e733e48 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:13.566833846 +0000 UTC m=+905.893896352 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert") pod "infra-operator-controller-manager-7d9c9d7fd8-f4ttw" (UID: "1cc58fc3-ce7e-459f-a9c9-27bf4e733e48") : secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:12.567287 master-0 kubenswrapper[4409]: W1203 14:41:12.567238 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod926503e0_c896_4b66_a633_80c3f986adc2.slice/crio-270aa8ace65af2126148705d1e66cac1f2fd51d5c766a5e353b5722ac5dba8fa WatchSource:0}: Error finding container 270aa8ace65af2126148705d1e66cac1f2fd51d5c766a5e353b5722ac5dba8fa: Status 404 returned error can't find the container with id 270aa8ace65af2126148705d1e66cac1f2fd51d5c766a5e353b5722ac5dba8fa Dec 03 14:41:12.577350 master-0 kubenswrapper[4409]: I1203 14:41:12.570088 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:12.577350 master-0 kubenswrapper[4409]: I1203 14:41:12.573153 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"] Dec 03 14:41:12.588772 master-0 kubenswrapper[4409]: I1203 14:41:12.586866 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qmt\" (UniqueName: \"kubernetes.io/projected/5756531c-95ec-4fb0-a406-f0d3e64d8795-kube-api-access-69qmt\") pod \"telemetry-operator-controller-manager-7b5867bfc7-b7fd4\" (UID: \"5756531c-95ec-4fb0-a406-f0d3e64d8795\") " pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:12.592671 master-0 kubenswrapper[4409]: I1203 14:41:12.592612 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:12.596035 master-0 kubenswrapper[4409]: I1203 14:41:12.593057 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzx4\" (UniqueName: \"kubernetes.io/projected/e1c992ff-f2a7-4938-977e-fd4ab03bde82-kube-api-access-crzx4\") pod \"swift-operator-controller-manager-696b999796-zd6d2\" (UID: \"e1c992ff-f2a7-4938-977e-fd4ab03bde82\") " pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" Dec 03 14:41:12.620070 master-0 kubenswrapper[4409]: I1203 14:41:12.617424 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" Dec 03 14:41:12.631518 master-0 kubenswrapper[4409]: I1203 14:41:12.631467 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt"] Dec 03 14:41:12.645974 master-0 kubenswrapper[4409]: W1203 14:41:12.645925 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89240d4a_2ca3_4f9a_9817_52023d7e2880.slice/crio-43d79eeb7c306aad6dcf20258e612d97c49afa5c277e6f37684d75a41d505f57 WatchSource:0}: Error finding container 43d79eeb7c306aad6dcf20258e612d97c49afa5c277e6f37684d75a41d505f57: Status 404 returned error can't find the container with id 43d79eeb7c306aad6dcf20258e612d97c49afa5c277e6f37684d75a41d505f57 Dec 03 14:41:12.647163 master-0 kubenswrapper[4409]: I1203 14:41:12.647066 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" Dec 03 14:41:12.647951 master-0 kubenswrapper[4409]: I1203 14:41:12.647683 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:12.669109 master-0 kubenswrapper[4409]: I1203 14:41:12.668866 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.669109 master-0 kubenswrapper[4409]: I1203 14:41:12.669045 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnx2\" (UniqueName: \"kubernetes.io/projected/a07832b5-fda1-4cac-acec-0354fbb8e91a-kube-api-access-fqnx2\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.669612 master-0 kubenswrapper[4409]: I1203 14:41:12.669139 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.669612 master-0 kubenswrapper[4409]: I1203 14:41:12.669266 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42ch5\" (UniqueName: \"kubernetes.io/projected/5d773bda-73fe-4e57-aed9-fc593e93a67f-kube-api-access-42ch5\") pod \"watcher-operator-controller-manager-6b9b669fdb-tvkgp\" (UID: \"5d773bda-73fe-4e57-aed9-fc593e93a67f\") " 
pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:12.669612 master-0 kubenswrapper[4409]: I1203 14:41:12.669320 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxsl\" (UniqueName: \"kubernetes.io/projected/7bd1f9d6-8524-4eea-beb1-a4cedf159998-kube-api-access-fpxsl\") pod \"test-operator-controller-manager-57dfcdd5b8-vmnjr\" (UID: \"7bd1f9d6-8524-4eea-beb1-a4cedf159998\") " pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:12.696568 master-0 kubenswrapper[4409]: I1203 14:41:12.696479 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxsl\" (UniqueName: \"kubernetes.io/projected/7bd1f9d6-8524-4eea-beb1-a4cedf159998-kube-api-access-fpxsl\") pod \"test-operator-controller-manager-57dfcdd5b8-vmnjr\" (UID: \"7bd1f9d6-8524-4eea-beb1-a4cedf159998\") " pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:12.704543 master-0 kubenswrapper[4409]: I1203 14:41:12.704504 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42ch5\" (UniqueName: \"kubernetes.io/projected/5d773bda-73fe-4e57-aed9-fc593e93a67f-kube-api-access-42ch5\") pod \"watcher-operator-controller-manager-6b9b669fdb-tvkgp\" (UID: \"5d773bda-73fe-4e57-aed9-fc593e93a67f\") " pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:12.704755 master-0 kubenswrapper[4409]: I1203 14:41:12.704578 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt"] Dec 03 14:41:12.707426 master-0 kubenswrapper[4409]: I1203 14:41:12.707400 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: I1203 14:41:12.774156 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: I1203 14:41:12.774302 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnx2\" (UniqueName: \"kubernetes.io/projected/a07832b5-fda1-4cac-acec-0354fbb8e91a-kube-api-access-fqnx2\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: I1203 14:41:12.774387 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjnhm\" (UniqueName: \"kubernetes.io/projected/3b523887-d3de-40c7-ba4e-52eec568c0f0-kube-api-access-wjnhm\") pod \"rabbitmq-cluster-operator-manager-78955d896f-bbcpt\" (UID: \"3b523887-d3de-40c7-ba4e-52eec568c0f0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: E1203 14:41:12.774303 4409 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: E1203 14:41:12.774520 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. 
No retries permitted until 2025-12-03 14:41:13.274493608 +0000 UTC m=+905.601556114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "metrics-server-cert" not found Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: E1203 14:41:12.774552 4409 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: I1203 14:41:12.774434 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.776130 master-0 kubenswrapper[4409]: E1203 14:41:12.774594 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:13.27458285 +0000 UTC m=+905.601645356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "webhook-server-cert" not found Dec 03 14:41:12.803129 master-0 kubenswrapper[4409]: I1203 14:41:12.799231 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:12.816387 master-0 kubenswrapper[4409]: I1203 14:41:12.816354 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnx2\" (UniqueName: \"kubernetes.io/projected/a07832b5-fda1-4cac-acec-0354fbb8e91a-kube-api-access-fqnx2\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:12.841429 master-0 kubenswrapper[4409]: W1203 14:41:12.834526 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1560eec_1871_4d57_ac22_c8f32d9ab53b.slice/crio-b778aef22c5fe3669cc66c3a2e4e3e2cd7c7845b35062028ddc0010d398d64a9 WatchSource:0}: Error finding container b778aef22c5fe3669cc66c3a2e4e3e2cd7c7845b35062028ddc0010d398d64a9: Status 404 returned error can't find the container with id b778aef22c5fe3669cc66c3a2e4e3e2cd7c7845b35062028ddc0010d398d64a9 Dec 03 14:41:12.881180 master-0 kubenswrapper[4409]: I1203 14:41:12.878191 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjnhm\" (UniqueName: \"kubernetes.io/projected/3b523887-d3de-40c7-ba4e-52eec568c0f0-kube-api-access-wjnhm\") pod \"rabbitmq-cluster-operator-manager-78955d896f-bbcpt\" (UID: \"3b523887-d3de-40c7-ba4e-52eec568c0f0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" Dec 03 14:41:12.910842 master-0 kubenswrapper[4409]: I1203 14:41:12.905847 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjnhm\" (UniqueName: \"kubernetes.io/projected/3b523887-d3de-40c7-ba4e-52eec568c0f0-kube-api-access-wjnhm\") pod \"rabbitmq-cluster-operator-manager-78955d896f-bbcpt\" (UID: \"3b523887-d3de-40c7-ba4e-52eec568c0f0\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" Dec 03 14:41:12.916859 master-0 kubenswrapper[4409]: I1203 14:41:12.913055 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f"] Dec 03 14:41:12.981628 master-0 kubenswrapper[4409]: I1203 14:41:12.980777 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:12.981628 master-0 kubenswrapper[4409]: E1203 14:41:12.981047 4409 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:12.981628 master-0 kubenswrapper[4409]: E1203 14:41:12.981135 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert podName:bc824ed3-cfbf-4219-816e-f4a7539359b0 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:13.981111189 +0000 UTC m=+906.308173695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert") pod "openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" (UID: "bc824ed3-cfbf-4219-816e-f4a7539359b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:13.005987 master-0 kubenswrapper[4409]: I1203 14:41:13.005900 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4"] Dec 03 14:41:13.025828 master-0 kubenswrapper[4409]: I1203 14:41:13.025763 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw"] Dec 03 14:41:13.041926 master-0 kubenswrapper[4409]: I1203 14:41:13.041844 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9"] Dec 03 14:41:13.070953 master-0 kubenswrapper[4409]: I1203 14:41:13.070790 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp"] Dec 03 14:41:13.082120 master-0 kubenswrapper[4409]: I1203 14:41:13.080409 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft"] Dec 03 14:41:13.095056 master-0 kubenswrapper[4409]: I1203 14:41:13.094959 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx"] Dec 03 14:41:13.109223 master-0 kubenswrapper[4409]: I1203 14:41:13.109055 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" Dec 03 14:41:13.265109 master-0 kubenswrapper[4409]: I1203 14:41:13.258878 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn"] Dec 03 14:41:13.288133 master-0 kubenswrapper[4409]: I1203 14:41:13.288058 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:13.289526 master-0 kubenswrapper[4409]: E1203 14:41:13.289481 4409 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 14:41:13.289606 master-0 kubenswrapper[4409]: I1203 14:41:13.289560 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:13.289653 master-0 kubenswrapper[4409]: E1203 14:41:13.289589 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:14.28956714 +0000 UTC m=+906.616629646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "metrics-server-cert" not found Dec 03 14:41:13.290158 master-0 kubenswrapper[4409]: E1203 14:41:13.290133 4409 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 14:41:13.290255 master-0 kubenswrapper[4409]: E1203 14:41:13.290234 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:14.290209479 +0000 UTC m=+906.617271975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "webhook-server-cert" not found Dec 03 14:41:13.396461 master-0 kubenswrapper[4409]: I1203 14:41:13.396329 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq"] Dec 03 14:41:13.412074 master-0 kubenswrapper[4409]: W1203 14:41:13.412016 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b297c8_bfae_4d29_9698_9850cefb3c6a.slice/crio-7135eaa7b3f4182f0cf896faaae62c59f387fc600bace989d69d33b4df186082 WatchSource:0}: Error finding container 7135eaa7b3f4182f0cf896faaae62c59f387fc600bace989d69d33b4df186082: Status 404 returned error can't find the container with id 7135eaa7b3f4182f0cf896faaae62c59f387fc600bace989d69d33b4df186082 Dec 03 14:41:13.425693 master-0 kubenswrapper[4409]: I1203 
14:41:13.425625 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr"] Dec 03 14:41:13.474252 master-0 kubenswrapper[4409]: I1203 14:41:13.474175 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" event={"ID":"b1560eec-1871-4d57-ac22-c8f32d9ab53b","Type":"ContainerStarted","Data":"b778aef22c5fe3669cc66c3a2e4e3e2cd7c7845b35062028ddc0010d398d64a9"} Dec 03 14:41:13.477099 master-0 kubenswrapper[4409]: I1203 14:41:13.476771 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" event={"ID":"89240d4a-2ca3-4f9a-9817-52023d7e2880","Type":"ContainerStarted","Data":"43d79eeb7c306aad6dcf20258e612d97c49afa5c277e6f37684d75a41d505f57"} Dec 03 14:41:13.479230 master-0 kubenswrapper[4409]: I1203 14:41:13.479207 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" event={"ID":"e7b297c8-bfae-4d29-9698-9850cefb3c6a","Type":"ContainerStarted","Data":"7135eaa7b3f4182f0cf896faaae62c59f387fc600bace989d69d33b4df186082"} Dec 03 14:41:13.481152 master-0 kubenswrapper[4409]: I1203 14:41:13.481078 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" event={"ID":"48b8028a-5751-4806-bb6f-9ba67ff58d60","Type":"ContainerStarted","Data":"c5f188da70ff462fa0083d0483d2e153d5b7b8851fda2121cb809ac0750c8b8a"} Dec 03 14:41:13.482642 master-0 kubenswrapper[4409]: I1203 14:41:13.482586 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" event={"ID":"6d1f8e16-846e-455b-b7e6-670be5612c5c","Type":"ContainerStarted","Data":"e953e15456f5e8a1a71aec16cfbd53a66fe5270535a721a2e1bf97fe18bea711"} Dec 03 14:41:13.489702 master-0 
kubenswrapper[4409]: I1203 14:41:13.488165 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" event={"ID":"9c1d3765-48c3-4f50-9b63-827eeefde7db","Type":"ContainerStarted","Data":"6aa476903ad452f57b023975dfd9dbe5b61bc434a71ab4f56f36b40434a5c52e"} Dec 03 14:41:13.490146 master-0 kubenswrapper[4409]: I1203 14:41:13.489718 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" event={"ID":"926503e0-c896-4b66-a633-80c3f986adc2","Type":"ContainerStarted","Data":"270aa8ace65af2126148705d1e66cac1f2fd51d5c766a5e353b5722ac5dba8fa"} Dec 03 14:41:13.491300 master-0 kubenswrapper[4409]: I1203 14:41:13.491275 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" event={"ID":"40f3ac32-c13b-4d65-96ca-08a77e2bd66a","Type":"ContainerStarted","Data":"6f3be21d7351c6faf2a28fea57da2d4559d9703d304835b537b50c8a5686ccba"} Dec 03 14:41:13.494748 master-0 kubenswrapper[4409]: I1203 14:41:13.494710 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" event={"ID":"a3d007f9-8a26-45da-8913-b6de85ecacbd","Type":"ContainerStarted","Data":"0179f2c74f6454dc7ee8ddc630a25da9c40a7ffdb5c6207d0c848a9c3033b331"} Dec 03 14:41:13.613190 master-0 kubenswrapper[4409]: I1203 14:41:13.613123 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:13.613433 master-0 kubenswrapper[4409]: E1203 14:41:13.613392 4409 secret.go:189] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:13.613551 master-0 kubenswrapper[4409]: E1203 14:41:13.613521 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert podName:1cc58fc3-ce7e-459f-a9c9-27bf4e733e48 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:15.613491049 +0000 UTC m=+907.940553555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert") pod "infra-operator-controller-manager-7d9c9d7fd8-f4ttw" (UID: "1cc58fc3-ce7e-459f-a9c9-27bf4e733e48") : secret "infra-operator-webhook-server-cert" not found Dec 03 14:41:13.684301 master-0 kubenswrapper[4409]: I1203 14:41:13.683694 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl"] Dec 03 14:41:13.692636 master-0 kubenswrapper[4409]: W1203 14:41:13.692487 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddae6346f_01f4_4206_a011_e2cb63760061.slice/crio-f54f44b11dc74fb43f078eda195226e8a62e44326d50783b3d8d4e0b2d4a0c74 WatchSource:0}: Error finding container f54f44b11dc74fb43f078eda195226e8a62e44326d50783b3d8d4e0b2d4a0c74: Status 404 returned error can't find the container with id f54f44b11dc74fb43f078eda195226e8a62e44326d50783b3d8d4e0b2d4a0c74 Dec 03 14:41:13.695054 master-0 kubenswrapper[4409]: I1203 14:41:13.694941 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd"] Dec 03 14:41:13.705564 master-0 kubenswrapper[4409]: I1203 14:41:13.704963 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx"] Dec 03 14:41:13.713139 master-0 
kubenswrapper[4409]: I1203 14:41:13.713081 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9"] Dec 03 14:41:13.737682 master-0 kubenswrapper[4409]: W1203 14:41:13.737428 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c7ef074_1a45_491c_9872_76f6a6315362.slice/crio-9e0911ac0755f7d2b83beb8742b645dd4f9136348172b1c43362cc64c5fca5a8 WatchSource:0}: Error finding container 9e0911ac0755f7d2b83beb8742b645dd4f9136348172b1c43362cc64c5fca5a8: Status 404 returned error can't find the container with id 9e0911ac0755f7d2b83beb8742b645dd4f9136348172b1c43362cc64c5fca5a8 Dec 03 14:41:14.039090 master-0 kubenswrapper[4409]: I1203 14:41:14.035902 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:14.039090 master-0 kubenswrapper[4409]: E1203 14:41:14.036301 4409 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:14.039090 master-0 kubenswrapper[4409]: E1203 14:41:14.036448 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert podName:bc824ed3-cfbf-4219-816e-f4a7539359b0 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:16.036400226 +0000 UTC m=+908.363462732 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert") pod "openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" (UID: "bc824ed3-cfbf-4219-816e-f4a7539359b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 03 14:41:14.116299 master-0 kubenswrapper[4409]: I1203 14:41:14.102248 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7"] Dec 03 14:41:14.152388 master-0 kubenswrapper[4409]: I1203 14:41:14.148760 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp"] Dec 03 14:41:14.169158 master-0 kubenswrapper[4409]: W1203 14:41:14.164546 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5756531c_95ec_4fb0_a406_f0d3e64d8795.slice/crio-97a471aa49c65329e7b6b6cdb69dcb063b949b2175b1e3a1b0071a9ff79f44a0 WatchSource:0}: Error finding container 97a471aa49c65329e7b6b6cdb69dcb063b949b2175b1e3a1b0071a9ff79f44a0: Status 404 returned error can't find the container with id 97a471aa49c65329e7b6b6cdb69dcb063b949b2175b1e3a1b0071a9ff79f44a0 Dec 03 14:41:14.198031 master-0 kubenswrapper[4409]: I1203 14:41:14.176086 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4"] Dec 03 14:41:14.198031 master-0 kubenswrapper[4409]: I1203 14:41:14.189283 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr"] Dec 03 14:41:14.250045 master-0 kubenswrapper[4409]: I1203 14:41:14.238299 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-696b999796-zd6d2"] Dec 03 14:41:14.261476 master-0 kubenswrapper[4409]: I1203 14:41:14.261380 4409 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt"] Dec 03 14:41:14.281027 master-0 kubenswrapper[4409]: W1203 14:41:14.275508 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b523887_d3de_40c7_ba4e_52eec568c0f0.slice/crio-12465294f32a2a91605c445f38b5262d4bb5f5e75b9c4dc6b041a32bc54dc0b3 WatchSource:0}: Error finding container 12465294f32a2a91605c445f38b5262d4bb5f5e75b9c4dc6b041a32bc54dc0b3: Status 404 returned error can't find the container with id 12465294f32a2a91605c445f38b5262d4bb5f5e75b9c4dc6b041a32bc54dc0b3 Dec 03 14:41:14.285330 master-0 kubenswrapper[4409]: E1203 14:41:14.284954 4409 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:50,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:30,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:10,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjnhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000810000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-78955d896f-bbcpt_openstack-operators(3b523887-d3de-40c7-ba4e-52eec568c0f0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 03 14:41:14.286221 master-0 kubenswrapper[4409]: E1203 14:41:14.286184 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" podUID="3b523887-d3de-40c7-ba4e-52eec568c0f0" Dec 03 14:41:14.343538 master-0 kubenswrapper[4409]: I1203 14:41:14.343442 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:14.343864 master-0 kubenswrapper[4409]: I1203 14:41:14.343639 4409 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:14.343960 master-0 kubenswrapper[4409]: E1203 14:41:14.343938 4409 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 03 14:41:14.344153 master-0 kubenswrapper[4409]: E1203 14:41:14.344121 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:16.344084015 +0000 UTC m=+908.671146541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "metrics-server-cert" not found Dec 03 14:41:14.344339 master-0 kubenswrapper[4409]: E1203 14:41:14.344270 4409 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 03 14:41:14.344433 master-0 kubenswrapper[4409]: E1203 14:41:14.344414 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:16.344382934 +0000 UTC m=+908.671445630 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "webhook-server-cert" not found Dec 03 14:41:14.511699 master-0 kubenswrapper[4409]: I1203 14:41:14.510951 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" event={"ID":"dae6346f-01f4-4206-a011-e2cb63760061","Type":"ContainerStarted","Data":"f54f44b11dc74fb43f078eda195226e8a62e44326d50783b3d8d4e0b2d4a0c74"} Dec 03 14:41:14.513818 master-0 kubenswrapper[4409]: I1203 14:41:14.513739 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" event={"ID":"5d773bda-73fe-4e57-aed9-fc593e93a67f","Type":"ContainerStarted","Data":"f8f6e5827a937ed44f5075f2a807600730c29b8487e1408da82a515966f3cc56"} Dec 03 14:41:14.517058 master-0 kubenswrapper[4409]: I1203 14:41:14.516990 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" event={"ID":"7bd1f9d6-8524-4eea-beb1-a4cedf159998","Type":"ContainerStarted","Data":"7506b23c59cbd2dbc447f5a8ff54718bba0dcbc86a227456070e58148aff2b87"} Dec 03 14:41:14.527142 master-0 kubenswrapper[4409]: I1203 14:41:14.526982 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" event={"ID":"5a439dbb-6458-4800-aade-ea490b402662","Type":"ContainerStarted","Data":"c82dc3831f428b0a985e365497c076f6a4788b105e9c71ed4011561509bc9bed"} Dec 03 14:41:14.530409 master-0 kubenswrapper[4409]: I1203 14:41:14.530346 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" 
event={"ID":"5756531c-95ec-4fb0-a406-f0d3e64d8795","Type":"ContainerStarted","Data":"97a471aa49c65329e7b6b6cdb69dcb063b949b2175b1e3a1b0071a9ff79f44a0"} Dec 03 14:41:14.542373 master-0 kubenswrapper[4409]: I1203 14:41:14.542277 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" event={"ID":"3c7ef074-1a45-491c-9872-76f6a6315362","Type":"ContainerStarted","Data":"9e0911ac0755f7d2b83beb8742b645dd4f9136348172b1c43362cc64c5fca5a8"} Dec 03 14:41:14.547833 master-0 kubenswrapper[4409]: I1203 14:41:14.547741 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" event={"ID":"e1c992ff-f2a7-4938-977e-fd4ab03bde82","Type":"ContainerStarted","Data":"87543996629c4c5a32795e879131f2f9a81738788e1fb2e44c8f232413294c5e"} Dec 03 14:41:14.552376 master-0 kubenswrapper[4409]: I1203 14:41:14.552306 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" event={"ID":"3b523887-d3de-40c7-ba4e-52eec568c0f0","Type":"ContainerStarted","Data":"12465294f32a2a91605c445f38b5262d4bb5f5e75b9c4dc6b041a32bc54dc0b3"} Dec 03 14:41:14.554779 master-0 kubenswrapper[4409]: E1203 14:41:14.554708 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" podUID="3b523887-d3de-40c7-ba4e-52eec568c0f0" Dec 03 14:41:14.559139 master-0 kubenswrapper[4409]: I1203 14:41:14.558501 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" 
event={"ID":"0ab1be43-6804-4cef-9727-2775f768c351","Type":"ContainerStarted","Data":"4abf6e909eac486e39a2a32eb77c3affaf1049cdc37e081e19feb25f688ea735"}
Dec 03 14:41:14.563301 master-0 kubenswrapper[4409]: I1203 14:41:14.563180 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" event={"ID":"1176c675-89df-4778-94ed-14f77f9cc668","Type":"ContainerStarted","Data":"d83feefe3415ce4851c276ccbd80c379ab9de477cb96788d64ce6b100c1267ee"}
Dec 03 14:41:15.584830 master-0 kubenswrapper[4409]: E1203 14:41:15.584581 4409 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" podUID="3b523887-d3de-40c7-ba4e-52eec568c0f0"
Dec 03 14:41:15.674565 master-0 kubenswrapper[4409]: I1203 14:41:15.672955 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:15.674565 master-0 kubenswrapper[4409]: E1203 14:41:15.673228 4409 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 03 14:41:15.674565 master-0 kubenswrapper[4409]: E1203 14:41:15.673337 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert podName:1cc58fc3-ce7e-459f-a9c9-27bf4e733e48 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:19.673314387 +0000 UTC m=+912.000376883 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert") pod "infra-operator-controller-manager-7d9c9d7fd8-f4ttw" (UID: "1cc58fc3-ce7e-459f-a9c9-27bf4e733e48") : secret "infra-operator-webhook-server-cert" not found
Dec 03 14:41:16.083658 master-0 kubenswrapper[4409]: I1203 14:41:16.083504 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:16.083915 master-0 kubenswrapper[4409]: E1203 14:41:16.083808 4409 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 14:41:16.083966 master-0 kubenswrapper[4409]: E1203 14:41:16.083927 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert podName:bc824ed3-cfbf-4219-816e-f4a7539359b0 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:20.083903825 +0000 UTC m=+912.410966331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert") pod "openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" (UID: "bc824ed3-cfbf-4219-816e-f4a7539359b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 14:41:16.389947 master-0 kubenswrapper[4409]: I1203 14:41:16.389758 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:16.390678 master-0 kubenswrapper[4409]: E1203 14:41:16.389930 4409 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 03 14:41:16.390821 master-0 kubenswrapper[4409]: E1203 14:41:16.390716 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:20.390687838 +0000 UTC m=+912.717750405 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "webhook-server-cert" not found
Dec 03 14:41:16.391098 master-0 kubenswrapper[4409]: I1203 14:41:16.391066 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:16.391358 master-0 kubenswrapper[4409]: E1203 14:41:16.391184 4409 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 03 14:41:16.391479 master-0 kubenswrapper[4409]: E1203 14:41:16.391427 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:20.391397509 +0000 UTC m=+912.718460075 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "metrics-server-cert" not found
Dec 03 14:41:18.045553 master-0 kubenswrapper[4409]: I1203 14:41:18.044315 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:41:18.051464 master-0 kubenswrapper[4409]: I1203 14:41:18.051365 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.059482 master-0 kubenswrapper[4409]: I1203 14:41:18.059426 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:41:18.130042 master-0 kubenswrapper[4409]: I1203 14:41:18.129932 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvbr\" (UniqueName: \"kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.130300 master-0 kubenswrapper[4409]: I1203 14:41:18.130174 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.130439 master-0 kubenswrapper[4409]: I1203 14:41:18.130400 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.238748 master-0 kubenswrapper[4409]: I1203 14:41:18.238666 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvbr\" (UniqueName: \"kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.239050 master-0 kubenswrapper[4409]: I1203 14:41:18.238812 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.239050 master-0 kubenswrapper[4409]: I1203 14:41:18.238928 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.239852 master-0 kubenswrapper[4409]: I1203 14:41:18.239815 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.239945 master-0 kubenswrapper[4409]: I1203 14:41:18.239875 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.275915 master-0 kubenswrapper[4409]: I1203 14:41:18.275866 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvbr\" (UniqueName: \"kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr\") pod \"certified-operators-j57qg\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") " pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:18.376372 master-0 kubenswrapper[4409]: I1203 14:41:18.375198 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:19.673861 master-0 kubenswrapper[4409]: I1203 14:41:19.673788 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:19.674540 master-0 kubenswrapper[4409]: E1203 14:41:19.674037 4409 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 03 14:41:19.674540 master-0 kubenswrapper[4409]: E1203 14:41:19.674157 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert podName:1cc58fc3-ce7e-459f-a9c9-27bf4e733e48 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:27.674133101 +0000 UTC m=+920.001195607 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert") pod "infra-operator-controller-manager-7d9c9d7fd8-f4ttw" (UID: "1cc58fc3-ce7e-459f-a9c9-27bf4e733e48") : secret "infra-operator-webhook-server-cert" not found
Dec 03 14:41:20.184917 master-0 kubenswrapper[4409]: I1203 14:41:20.184839 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:20.185227 master-0 kubenswrapper[4409]: E1203 14:41:20.185088 4409 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 14:41:20.185368 master-0 kubenswrapper[4409]: E1203 14:41:20.185338 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert podName:bc824ed3-cfbf-4219-816e-f4a7539359b0 nodeName:}" failed. No retries permitted until 2025-12-03 14:41:28.185245722 +0000 UTC m=+920.512308238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert") pod "openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" (UID: "bc824ed3-cfbf-4219-816e-f4a7539359b0") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 03 14:41:20.490415 master-0 kubenswrapper[4409]: I1203 14:41:20.490297 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:20.490642 master-0 kubenswrapper[4409]: I1203 14:41:20.490499 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:20.490705 master-0 kubenswrapper[4409]: E1203 14:41:20.490653 4409 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 03 14:41:20.490705 master-0 kubenswrapper[4409]: E1203 14:41:20.490706 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:28.490689508 +0000 UTC m=+920.817752014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "metrics-server-cert" not found
Dec 03 14:41:20.491083 master-0 kubenswrapper[4409]: E1203 14:41:20.491062 4409 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Dec 03 14:41:20.491155 master-0 kubenswrapper[4409]: E1203 14:41:20.491101 4409 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs podName:a07832b5-fda1-4cac-acec-0354fbb8e91a nodeName:}" failed. No retries permitted until 2025-12-03 14:41:28.491090309 +0000 UTC m=+920.818152815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs") pod "openstack-operator-controller-manager-57d98476c4-856ml" (UID: "a07832b5-fda1-4cac-acec-0354fbb8e91a") : secret "webhook-server-cert" not found
Dec 03 14:41:27.741679 master-0 kubenswrapper[4409]: I1203 14:41:27.741611 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:27.745161 master-0 kubenswrapper[4409]: I1203 14:41:27.744950 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1cc58fc3-ce7e-459f-a9c9-27bf4e733e48-cert\") pod \"infra-operator-controller-manager-7d9c9d7fd8-f4ttw\" (UID: \"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48\") " pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:27.788840 master-0 kubenswrapper[4409]: I1203 14:41:27.788724 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:28.254573 master-0 kubenswrapper[4409]: I1203 14:41:28.254413 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:28.257599 master-0 kubenswrapper[4409]: I1203 14:41:28.257514 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bc824ed3-cfbf-4219-816e-f4a7539359b0-cert\") pod \"openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5\" (UID: \"bc824ed3-cfbf-4219-816e-f4a7539359b0\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:28.427355 master-0 kubenswrapper[4409]: I1203 14:41:28.427299 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:28.560254 master-0 kubenswrapper[4409]: I1203 14:41:28.560183 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:28.560564 master-0 kubenswrapper[4409]: I1203 14:41:28.560355 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:28.563988 master-0 kubenswrapper[4409]: I1203 14:41:28.563949 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-metrics-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:28.564076 master-0 kubenswrapper[4409]: I1203 14:41:28.564056 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a07832b5-fda1-4cac-acec-0354fbb8e91a-webhook-certs\") pod \"openstack-operator-controller-manager-57d98476c4-856ml\" (UID: \"a07832b5-fda1-4cac-acec-0354fbb8e91a\") " pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:28.686942 master-0 kubenswrapper[4409]: I1203 14:41:28.686850 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:32.663406 master-0 kubenswrapper[4409]: I1203 14:41:32.663264 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"]
Dec 03 14:41:32.676805 master-0 kubenswrapper[4409]: I1203 14:41:32.676734 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:41:32.813949 master-0 kubenswrapper[4409]: I1203 14:41:32.813879 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" event={"ID":"9c1d3765-48c3-4f50-9b63-827eeefde7db","Type":"ContainerStarted","Data":"383addc7496bc3489858af1251b669efcee500a6fd045874bfb77e84679e8c4b"}
Dec 03 14:41:32.815813 master-0 kubenswrapper[4409]: I1203 14:41:32.815769 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"]
Dec 03 14:41:32.816745 master-0 kubenswrapper[4409]: I1203 14:41:32.816710 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" event={"ID":"0ab1be43-6804-4cef-9727-2775f768c351","Type":"ContainerStarted","Data":"d2e4137db6884b28248fdb303275d40f64051370504b641eea9d89875ba18519"}
Dec 03 14:41:32.823457 master-0 kubenswrapper[4409]: I1203 14:41:32.823421 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" event={"ID":"40f3ac32-c13b-4d65-96ca-08a77e2bd66a","Type":"ContainerStarted","Data":"e8aba80d067ae34e12beef182ba50acdc27b9ef0f836119b8de48d75871416a7"}
Dec 03 14:41:32.824160 master-0 kubenswrapper[4409]: I1203 14:41:32.824138 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"]
Dec 03 14:41:32.825448 master-0 kubenswrapper[4409]: I1203 14:41:32.825419 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" event={"ID":"5a439dbb-6458-4800-aade-ea490b402662","Type":"ContainerStarted","Data":"8124decaade2f60d2edc2ff903e3ca3650b848bfefb8957ed32bda684b78144a"}
Dec 03 14:41:32.827494 master-0 kubenswrapper[4409]: I1203 14:41:32.827439 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" event={"ID":"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b","Type":"ContainerStarted","Data":"0c6467e1fef511ca469e3651bbacdf7b5df881f8020611b0b016cfc7ef89aae0"}
Dec 03 14:41:32.828781 master-0 kubenswrapper[4409]: I1203 14:41:32.828732 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" event={"ID":"a3d007f9-8a26-45da-8913-b6de85ecacbd","Type":"ContainerStarted","Data":"a8a3ac946fea1651cea3b6255287cdad29bcd502a3e24a5d636d6a7b699b9492"}
Dec 03 14:41:32.831806 master-0 kubenswrapper[4409]: I1203 14:41:32.831606 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" event={"ID":"6d1f8e16-846e-455b-b7e6-670be5612c5c","Type":"ContainerStarted","Data":"15d30bedb6582ba6b65f2f3d88e2d813080569888c590a5c72cc1641779714c0"}
Dec 03 14:41:34.137892 master-0 kubenswrapper[4409]: W1203 14:41:34.137656 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc824ed3_cfbf_4219_816e_f4a7539359b0.slice/crio-9821aa9fa6c52f3a901c844cd62c2e03f67f272f004f9200036e0a1ec2c074d1 WatchSource:0}: Error finding container 9821aa9fa6c52f3a901c844cd62c2e03f67f272f004f9200036e0a1ec2c074d1: Status 404 returned error can't find the container with id 9821aa9fa6c52f3a901c844cd62c2e03f67f272f004f9200036e0a1ec2c074d1
Dec 03 14:41:34.859900 master-0 kubenswrapper[4409]: I1203 14:41:34.859741 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" event={"ID":"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48","Type":"ContainerStarted","Data":"156e267850d186a5bec714cfa3f1ec0df6c057f596cd331b0f1970580268b0c0"}
Dec 03 14:41:34.864078 master-0 kubenswrapper[4409]: I1203 14:41:34.863980 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" event={"ID":"5d773bda-73fe-4e57-aed9-fc593e93a67f","Type":"ContainerStarted","Data":"7a5382c2aaedb0d4aaa1a709a07ccd9dafd44fb1ba9c24a6f7725502784e8079"}
Dec 03 14:41:34.868543 master-0 kubenswrapper[4409]: I1203 14:41:34.868483 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" event={"ID":"7bd1f9d6-8524-4eea-beb1-a4cedf159998","Type":"ContainerStarted","Data":"881063fcfc3c8cae7e3faa037e3597b534273256a190989d7c74bc9a44ac2ca3"}
Dec 03 14:41:34.870077 master-0 kubenswrapper[4409]: I1203 14:41:34.870026 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerStarted","Data":"1b8f5b55413201d0abcd7303573259201ea9215fd66a4587245566204b0b5191"}
Dec 03 14:41:34.872081 master-0 kubenswrapper[4409]: I1203 14:41:34.871331 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" event={"ID":"bc824ed3-cfbf-4219-816e-f4a7539359b0","Type":"ContainerStarted","Data":"9821aa9fa6c52f3a901c844cd62c2e03f67f272f004f9200036e0a1ec2c074d1"}
Dec 03 14:41:34.874149 master-0 kubenswrapper[4409]: I1203 14:41:34.873074 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" event={"ID":"926503e0-c896-4b66-a633-80c3f986adc2","Type":"ContainerStarted","Data":"290c64b2bc58c55f3315b3f3620187037e12386c8e3930ae2e8923369039b583"}
Dec 03 14:41:34.876931 master-0 kubenswrapper[4409]: I1203 14:41:34.876885 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" event={"ID":"a07832b5-fda1-4cac-acec-0354fbb8e91a","Type":"ContainerStarted","Data":"4817b7fa2f826186e66add5073495b40332470e10de6b808701b579c702e0f08"}
Dec 03 14:41:37.796417 master-0 kubenswrapper[4409]: E1203 14:41:37.796126 4409 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde45da87_bb9d_4659_a765_72146b349c2d.slice/crio-32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 14:41:37.798674 master-0 kubenswrapper[4409]: E1203 14:41:37.798584 4409 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde45da87_bb9d_4659_a765_72146b349c2d.slice/crio-conmon-32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde45da87_bb9d_4659_a765_72146b349c2d.slice/crio-32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4.scope\": RecentStats: unable to find data in memory cache]"
Dec 03 14:41:38.036326 master-0 kubenswrapper[4409]: I1203 14:41:38.036258 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" event={"ID":"e7b297c8-bfae-4d29-9698-9850cefb3c6a","Type":"ContainerStarted","Data":"552491982ad9faa967cbe5fd5901dac0bc7616401263fc82a9e944b1bd1dc9a6"}
Dec 03 14:41:38.038574 master-0 kubenswrapper[4409]: I1203 14:41:38.038540 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" event={"ID":"e1c992ff-f2a7-4938-977e-fd4ab03bde82","Type":"ContainerStarted","Data":"6b907cb89d0288cb14d2355e88e3930a54515ea56faf8b7ba0dcffad5ab064bc"}
Dec 03 14:41:38.040270 master-0 kubenswrapper[4409]: I1203 14:41:38.040212 4409 generic.go:334] "Generic (PLEG): container finished" podID="de45da87-bb9d-4659-a765-72146b349c2d" containerID="32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4" exitCode=0
Dec 03 14:41:38.040375 master-0 kubenswrapper[4409]: I1203 14:41:38.040274 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerDied","Data":"32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4"}
Dec 03 14:41:38.041464 master-0 kubenswrapper[4409]: I1203 14:41:38.041425 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" event={"ID":"89240d4a-2ca3-4f9a-9817-52023d7e2880","Type":"ContainerStarted","Data":"b12599367525dd4c14082899adbfb08c76ae6c762e514a8e5fe03063c30d2c4f"}
Dec 03 14:41:38.043132 master-0 kubenswrapper[4409]: I1203 14:41:38.043058 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" event={"ID":"1176c675-89df-4778-94ed-14f77f9cc668","Type":"ContainerStarted","Data":"4b9c910ccd1ab2e62bcd73cafc1e2aaf668051e2a626e1fa5985e25c32f8eacf"}
Dec 03 14:41:38.047886 master-0 kubenswrapper[4409]: I1203 14:41:38.047837 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" event={"ID":"a07832b5-fda1-4cac-acec-0354fbb8e91a","Type":"ContainerStarted","Data":"8e70792f02b6948f9518b44358ab400c362cf72450db62f1b3b2d022ad7c542c"}
Dec 03 14:41:38.049381 master-0 kubenswrapper[4409]: I1203 14:41:38.049358 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml"
Dec 03 14:41:38.051522 master-0 kubenswrapper[4409]: I1203 14:41:38.051498 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" event={"ID":"5756531c-95ec-4fb0-a406-f0d3e64d8795","Type":"ContainerStarted","Data":"4eeea1a4db6372874658db14dfdf275bac5d5d5818e023184bb3d0589dbb88b4"}
Dec 03 14:41:38.053174 master-0 kubenswrapper[4409]: I1203 14:41:38.053117 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" event={"ID":"3c7ef074-1a45-491c-9872-76f6a6315362","Type":"ContainerStarted","Data":"5dfc013805f92d8700a2dd2583e6d9ae22cdf1ce279f3e016b877907b93a75fb"}
Dec 03 14:41:38.054423 master-0 kubenswrapper[4409]: I1203 14:41:38.054369 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" event={"ID":"dae6346f-01f4-4206-a011-e2cb63760061","Type":"ContainerStarted","Data":"41cbc0712b444941ffe78c3d0732201f23ebb1501e8646eafc10a9a3eeed1c01"}
Dec 03 14:41:38.055779 master-0 kubenswrapper[4409]: I1203 14:41:38.055749 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" event={"ID":"b1560eec-1871-4d57-ac22-c8f32d9ab53b","Type":"ContainerStarted","Data":"9ffd6282cb4329bdd7632a171bbf444e204fe3c10ceae29ccf0332e08c7f5048"}
Dec 03 14:41:39.071883 master-0 kubenswrapper[4409]: I1203 14:41:39.071762 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" event={"ID":"48b8028a-5751-4806-bb6f-9ba67ff58d60","Type":"ContainerStarted","Data":"91e0f301ffd3b998bd8dd468a91fe24abf1989318adac0246435f5885ee9c24d"}
Dec 03 14:41:40.805033 master-0 kubenswrapper[4409]: I1203 14:41:40.802739 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qc57x"]
Dec 03 14:41:40.818066 master-0 kubenswrapper[4409]: I1203 14:41:40.817837 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:40.865329 master-0 kubenswrapper[4409]: I1203 14:41:40.863094 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qc57x"]
Dec 03 14:41:41.007177 master-0 kubenswrapper[4409]: I1203 14:41:41.004045 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" podStartSLOduration=30.003969401 podStartE2EDuration="30.003969401s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-03 14:41:40.988491134 +0000 UTC m=+933.315553660" watchObservedRunningTime="2025-12-03 14:41:41.003969401 +0000 UTC m=+933.331031907"
Dec 03 14:41:41.007907 master-0 kubenswrapper[4409]: I1203 14:41:41.007829 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.008200 master-0 kubenswrapper[4409]: I1203 14:41:41.008162 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqdt\" (UniqueName: \"kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.008308 master-0 kubenswrapper[4409]: I1203 14:41:41.008242 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.110220 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.110322 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqdt\" (UniqueName: \"kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.110343 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.110962 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.111506 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.119088 master-0 kubenswrapper[4409]: I1203 14:41:41.112001 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" event={"ID":"3b523887-d3de-40c7-ba4e-52eec568c0f0","Type":"ContainerStarted","Data":"a1588449b3cf624909b6dc90ec40dc1503cf91d3fec5cf6ed604e73937b7c54b"}
Dec 03 14:41:41.147866 master-0 kubenswrapper[4409]: I1203 14:41:41.147554 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzqdt\" (UniqueName: \"kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt\") pod \"community-operators-qc57x\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") " pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:41.170033 master-0 kubenswrapper[4409]: I1203 14:41:41.159691 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-bbcpt" podStartSLOduration=2.87202059 podStartE2EDuration="29.159656852s" podCreationTimestamp="2025-12-03 14:41:12 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.284671155 +0000 UTC m=+906.611733661" lastFinishedPulling="2025-12-03 14:41:40.572307407 +0000 UTC m=+932.899369923" observedRunningTime="2025-12-03 14:41:41.130812087 +0000 UTC m=+933.457874603" watchObservedRunningTime="2025-12-03 14:41:41.159656852 +0000 UTC m=+933.486719358"
Dec 03 14:41:41.201026 master-0 kubenswrapper[4409]: I1203 14:41:41.189388 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:41:42.378998 master-0 kubenswrapper[4409]: I1203 14:41:42.378929 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qc57x"]
Dec 03 14:41:43.294187 master-0 kubenswrapper[4409]: I1203 14:41:43.294100 4409 generic.go:334] "Generic (PLEG): container finished" podID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerID="90bec9e5b1ab207a8c43d1628b2d22404a6c23ee24a2867a3987f61dcbc937fd" exitCode=0
Dec 03 14:41:43.294600 master-0 kubenswrapper[4409]: I1203 14:41:43.294209 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerDied","Data":"90bec9e5b1ab207a8c43d1628b2d22404a6c23ee24a2867a3987f61dcbc937fd"}
Dec 03 14:41:43.294600 master-0 kubenswrapper[4409]: I1203 14:41:43.294296 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerStarted","Data":"2da9f4894535602085caa72ae4983db1e2d5fd1ead5299115a7a495c7602e9be"}
Dec 03 14:41:43.300767 master-0 kubenswrapper[4409]: I1203 14:41:43.300719 4409 generic.go:334] "Generic (PLEG): container finished" podID="de45da87-bb9d-4659-a765-72146b349c2d" containerID="aa68f81b105e42c2e8a1ba976cb81908e42d1816c4baba6bcc38d01966a01f99" exitCode=0
Dec 03 14:41:43.300864 master-0 kubenswrapper[4409]: I1203 14:41:43.300772 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerDied","Data":"aa68f81b105e42c2e8a1ba976cb81908e42d1816c4baba6bcc38d01966a01f99"} Dec 03 14:41:47.459578 master-0 kubenswrapper[4409]: I1203 14:41:47.435027 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"] Dec 03 14:41:47.459578 master-0 kubenswrapper[4409]: I1203 14:41:47.437796 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.573883 master-0 kubenswrapper[4409]: I1203 14:41:47.573571 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"] Dec 03 14:41:47.655050 master-0 kubenswrapper[4409]: I1203 14:41:47.654601 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.655050 master-0 kubenswrapper[4409]: I1203 14:41:47.654758 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lch\" (UniqueName: \"kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.655050 master-0 kubenswrapper[4409]: I1203 14:41:47.654789 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " 
pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.757241 master-0 kubenswrapper[4409]: I1203 14:41:47.757179 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.757395 master-0 kubenswrapper[4409]: I1203 14:41:47.757373 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lch\" (UniqueName: \"kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.757452 master-0 kubenswrapper[4409]: I1203 14:41:47.757400 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.758064 master-0 kubenswrapper[4409]: I1203 14:41:47.758043 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.758306 master-0 kubenswrapper[4409]: I1203 14:41:47.758272 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities\") pod \"redhat-marketplace-tpxmq\" (UID: 
\"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.781589 master-0 kubenswrapper[4409]: I1203 14:41:47.781534 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lch\" (UniqueName: \"kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch\") pod \"redhat-marketplace-tpxmq\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") " pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:47.841104 master-0 kubenswrapper[4409]: I1203 14:41:47.840493 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tpxmq" Dec 03 14:41:48.767079 master-0 kubenswrapper[4409]: I1203 14:41:48.766973 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-57d98476c4-856ml" Dec 03 14:41:48.798719 master-0 kubenswrapper[4409]: I1203 14:41:48.798658 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"] Dec 03 14:41:49.615089 master-0 kubenswrapper[4409]: I1203 14:41:49.609319 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" event={"ID":"bc824ed3-cfbf-4219-816e-f4a7539359b0","Type":"ContainerStarted","Data":"a235f752b16ced59862ecf5bbfe7006c49a59f84514b7b9d2fe84be64f6001ec"} Dec 03 14:41:49.619103 master-0 kubenswrapper[4409]: I1203 14:41:49.617667 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" event={"ID":"9c1d3765-48c3-4f50-9b63-827eeefde7db","Type":"ContainerStarted","Data":"378efa9b49977e83822bf4cb48ba26b3e8cd3c9d3aac48449b2fc7ae1d0da4d5"} Dec 03 14:41:49.619229 master-0 kubenswrapper[4409]: I1203 14:41:49.619140 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:49.623608 master-0 kubenswrapper[4409]: I1203 14:41:49.622829 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerStarted","Data":"47f8dbe11f500bd2d752669fa6134e4a5c55b17b5b96e561d825f9e65d3f46e0"} Dec 03 14:41:49.624734 master-0 kubenswrapper[4409]: I1203 14:41:49.624683 4409 generic.go:334] "Generic (PLEG): container finished" podID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerID="11e80892a6689e8b3eb694e96b72edfe937d7389098ec22b346845719eab68ea" exitCode=0 Dec 03 14:41:49.624831 master-0 kubenswrapper[4409]: I1203 14:41:49.624756 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerDied","Data":"11e80892a6689e8b3eb694e96b72edfe937d7389098ec22b346845719eab68ea"} Dec 03 14:41:49.625735 master-0 kubenswrapper[4409]: I1203 14:41:49.625684 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" Dec 03 14:41:49.629086 master-0 kubenswrapper[4409]: I1203 14:41:49.629018 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" event={"ID":"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48","Type":"ContainerStarted","Data":"86efe5744a9e7d97379b4458b4c08e7bedfc7ac86b2695ef34c6a8ae8ef9bb0e"} Dec 03 14:41:49.634917 master-0 kubenswrapper[4409]: I1203 14:41:49.634845 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerStarted","Data":"a909e9b9ea344a39898ef3c764f703c1517d75f12132444891c63166f6b2e355"} Dec 03 14:41:49.641849 master-0 kubenswrapper[4409]: I1203 
14:41:49.641767 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" event={"ID":"926503e0-c896-4b66-a633-80c3f986adc2","Type":"ContainerStarted","Data":"5699ac71662137c81d06620137ce55dce279fd75ee99b6b10bad670718feaca9"} Dec 03 14:41:49.642433 master-0 kubenswrapper[4409]: I1203 14:41:49.642367 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:49.655694 master-0 kubenswrapper[4409]: I1203 14:41:49.651734 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" Dec 03 14:41:49.670535 master-0 kubenswrapper[4409]: I1203 14:41:49.663814 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" event={"ID":"89240d4a-2ca3-4f9a-9817-52023d7e2880","Type":"ContainerStarted","Data":"cc71b66d0c03820bc0070a123ac7cf5bc45bd7a2bddf856d5cc9b8ce9db432bf"} Dec 03 14:41:49.670535 master-0 kubenswrapper[4409]: I1203 14:41:49.665346 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:49.680132 master-0 kubenswrapper[4409]: I1203 14:41:49.671858 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" Dec 03 14:41:49.680132 master-0 kubenswrapper[4409]: I1203 14:41:49.672122 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" event={"ID":"7bd1f9d6-8524-4eea-beb1-a4cedf159998","Type":"ContainerStarted","Data":"ad3198d14be459b6c005a4421428dce97695ad42a5e81b12124cf82c7a76134f"} Dec 03 14:41:49.680132 master-0 kubenswrapper[4409]: I1203 14:41:49.673214 
4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:49.680132 master-0 kubenswrapper[4409]: I1203 14:41:49.675485 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" Dec 03 14:41:49.707113 master-0 kubenswrapper[4409]: I1203 14:41:49.699953 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-7fd96594c7-24pft" podStartSLOduration=3.501097018 podStartE2EDuration="38.699834834s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.918236532 +0000 UTC m=+905.245299038" lastFinishedPulling="2025-12-03 14:41:48.116974348 +0000 UTC m=+940.444036854" observedRunningTime="2025-12-03 14:41:49.645877202 +0000 UTC m=+941.972939728" watchObservedRunningTime="2025-12-03 14:41:49.699834834 +0000 UTC m=+942.026897360" Dec 03 14:41:49.741621 master-0 kubenswrapper[4409]: I1203 14:41:49.740876 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-f8856dd79-rbqv4" podStartSLOduration=3.234127268 podStartE2EDuration="38.738778683s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.597331609 +0000 UTC m=+904.924394115" lastFinishedPulling="2025-12-03 14:41:48.101983024 +0000 UTC m=+940.429045530" observedRunningTime="2025-12-03 14:41:49.668360416 +0000 UTC m=+941.995422922" watchObservedRunningTime="2025-12-03 14:41:49.738778683 +0000 UTC m=+942.065841479" Dec 03 14:41:49.841461 master-0 kubenswrapper[4409]: I1203 14:41:49.841267 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j57qg" podStartSLOduration=22.923248541 podStartE2EDuration="32.841235374s" 
podCreationTimestamp="2025-12-03 14:41:17 +0000 UTC" firstStartedPulling="2025-12-03 14:41:38.042653635 +0000 UTC m=+930.369716141" lastFinishedPulling="2025-12-03 14:41:47.960640468 +0000 UTC m=+940.287702974" observedRunningTime="2025-12-03 14:41:49.792576571 +0000 UTC m=+942.119639077" watchObservedRunningTime="2025-12-03 14:41:49.841235374 +0000 UTC m=+942.168297880" Dec 03 14:41:49.865702 master-0 kubenswrapper[4409]: I1203 14:41:49.865547 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-57dfcdd5b8-vmnjr" podStartSLOduration=4.898051775 podStartE2EDuration="38.865508839s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.149224866 +0000 UTC m=+906.476287372" lastFinishedPulling="2025-12-03 14:41:48.11668193 +0000 UTC m=+940.443744436" observedRunningTime="2025-12-03 14:41:49.820045626 +0000 UTC m=+942.147108132" watchObservedRunningTime="2025-12-03 14:41:49.865508839 +0000 UTC m=+942.192571345" Dec 03 14:41:49.914155 master-0 kubenswrapper[4409]: I1203 14:41:49.909600 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84bc9f68f5-s8bzw" podStartSLOduration=3.47860971 podStartE2EDuration="38.909575872s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.685938014 +0000 UTC m=+905.013000530" lastFinishedPulling="2025-12-03 14:41:48.116904176 +0000 UTC m=+940.443966692" observedRunningTime="2025-12-03 14:41:49.866325782 +0000 UTC m=+942.193388288" watchObservedRunningTime="2025-12-03 14:41:49.909575872 +0000 UTC m=+942.236638368" Dec 03 14:41:50.949032 master-0 kubenswrapper[4409]: I1203 14:41:50.946824 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" 
event={"ID":"5756531c-95ec-4fb0-a406-f0d3e64d8795","Type":"ContainerStarted","Data":"d22077f172141dec773aa4aa393a5d7661f05f4dc05b092900044248ca604d63"} Dec 03 14:41:50.949032 master-0 kubenswrapper[4409]: I1203 14:41:50.947823 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:50.949651 master-0 kubenswrapper[4409]: I1203 14:41:50.949584 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" Dec 03 14:41:50.950810 master-0 kubenswrapper[4409]: I1203 14:41:50.949696 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" event={"ID":"1cc58fc3-ce7e-459f-a9c9-27bf4e733e48","Type":"ContainerStarted","Data":"cc8e3b4ab4ac5dc445681bb103c03daaf35280a3cad3412c8306878c22d299bc"} Dec 03 14:41:50.950810 master-0 kubenswrapper[4409]: I1203 14:41:50.950494 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" Dec 03 14:41:50.965331 master-0 kubenswrapper[4409]: I1203 14:41:50.962225 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" event={"ID":"bc824ed3-cfbf-4219-816e-f4a7539359b0","Type":"ContainerStarted","Data":"4935c617b30f22cdcb6bbd7e76be052e9d1ce19086b6fa3f21a8751186f311d7"} Dec 03 14:41:50.965331 master-0 kubenswrapper[4409]: I1203 14:41:50.963539 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" Dec 03 14:41:50.974207 master-0 kubenswrapper[4409]: I1203 14:41:50.974100 4409 generic.go:334] "Generic (PLEG): container finished" podID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" 
containerID="630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524" exitCode=0 Dec 03 14:41:50.974719 master-0 kubenswrapper[4409]: I1203 14:41:50.974238 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerDied","Data":"630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524"} Dec 03 14:41:50.976817 master-0 kubenswrapper[4409]: I1203 14:41:50.975693 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-b7fd4" podStartSLOduration=5.328053549 podStartE2EDuration="39.975665173s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.211667091 +0000 UTC m=+906.538729587" lastFinishedPulling="2025-12-03 14:41:48.859278705 +0000 UTC m=+941.186341211" observedRunningTime="2025-12-03 14:41:50.973599314 +0000 UTC m=+943.300661840" watchObservedRunningTime="2025-12-03 14:41:50.975665173 +0000 UTC m=+943.302727679" Dec 03 14:41:51.014360 master-0 kubenswrapper[4409]: I1203 14:41:51.010317 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" event={"ID":"b1560eec-1871-4d57-ac22-c8f32d9ab53b","Type":"ContainerStarted","Data":"2b0d442ca45ccf848ff5baa23a963ebe100876658568e715ff64009a50a7205f"} Dec 03 14:41:51.014360 master-0 kubenswrapper[4409]: I1203 14:41:51.012599 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:51.018870 master-0 kubenswrapper[4409]: I1203 14:41:51.018799 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" Dec 03 14:41:51.020859 master-0 kubenswrapper[4409]: I1203 14:41:51.020772 4409 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" event={"ID":"40f3ac32-c13b-4d65-96ca-08a77e2bd66a","Type":"ContainerStarted","Data":"40a16a506d2219f2040f61148bcff6986d5373b2cdcd91b86a77b1a49e5a3b8e"} Dec 03 14:41:51.021586 master-0 kubenswrapper[4409]: I1203 14:41:51.021560 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:51.029465 master-0 kubenswrapper[4409]: I1203 14:41:51.029351 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5" podStartSLOduration=27.736264664 podStartE2EDuration="40.029316676s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:34.139554823 +0000 UTC m=+926.466617329" lastFinishedPulling="2025-12-03 14:41:46.432606835 +0000 UTC m=+938.759669341" observedRunningTime="2025-12-03 14:41:51.012580794 +0000 UTC m=+943.339643320" watchObservedRunningTime="2025-12-03 14:41:51.029316676 +0000 UTC m=+943.356379182" Dec 03 14:41:51.044427 master-0 kubenswrapper[4409]: I1203 14:41:51.044334 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw" podStartSLOduration=26.23122905 podStartE2EDuration="40.044311559s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:34.172841994 +0000 UTC m=+926.499904490" lastFinishedPulling="2025-12-03 14:41:47.985924473 +0000 UTC m=+940.312986999" observedRunningTime="2025-12-03 14:41:51.041935752 +0000 UTC m=+943.368998268" watchObservedRunningTime="2025-12-03 14:41:51.044311559 +0000 UTC m=+943.371374075" Dec 03 14:41:51.052246 master-0 kubenswrapper[4409]: I1203 14:41:51.046489 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" Dec 03 14:41:51.062198 master-0 kubenswrapper[4409]: I1203 14:41:51.057346 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" event={"ID":"3c7ef074-1a45-491c-9872-76f6a6315362","Type":"ContainerStarted","Data":"f9a3383a509464aac7ec46fb652514c369269d8d2f0b4c2135cf88dba6db1b5a"} Dec 03 14:41:51.062198 master-0 kubenswrapper[4409]: I1203 14:41:51.057979 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:51.070789 master-0 kubenswrapper[4409]: I1203 14:41:51.070681 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" Dec 03 14:41:51.082261 master-0 kubenswrapper[4409]: I1203 14:41:51.080553 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" event={"ID":"5d773bda-73fe-4e57-aed9-fc593e93a67f","Type":"ContainerStarted","Data":"76f34ea810b40bac7c3c1b7d08b1bdf78a969b7b331fdfa51f4cf10cff15b95d"} Dec 03 14:41:51.082261 master-0 kubenswrapper[4409]: I1203 14:41:51.081303 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:51.084811 master-0 kubenswrapper[4409]: I1203 14:41:51.084747 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" Dec 03 14:41:51.129676 master-0 kubenswrapper[4409]: I1203 14:41:51.129436 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-647d75769b-l99kn" podStartSLOduration=4.928474123 podStartE2EDuration="40.12939418s" 
podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.27505423 +0000 UTC m=+905.602116736" lastFinishedPulling="2025-12-03 14:41:48.475974287 +0000 UTC m=+940.803036793" observedRunningTime="2025-12-03 14:41:51.110099206 +0000 UTC m=+943.437161712" watchObservedRunningTime="2025-12-03 14:41:51.12939418 +0000 UTC m=+943.456456686" Dec 03 14:41:51.130132 master-0 kubenswrapper[4409]: I1203 14:41:51.130055 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" event={"ID":"6d1f8e16-846e-455b-b7e6-670be5612c5c","Type":"ContainerStarted","Data":"1349327906a99062e9709d65a514ee9482a5e431167db1afd8c0bb3ba77e891c"} Dec 03 14:41:51.130245 master-0 kubenswrapper[4409]: I1203 14:41:51.130173 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:51.133340 master-0 kubenswrapper[4409]: I1203 14:41:51.132833 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" Dec 03 14:41:51.150142 master-0 kubenswrapper[4409]: I1203 14:41:51.149894 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" event={"ID":"0ab1be43-6804-4cef-9727-2775f768c351","Type":"ContainerStarted","Data":"4f7444406881b7df987e53b6f0f53386e3679e934a474bb83454f405ba47d2ed"} Dec 03 14:41:51.150965 master-0 kubenswrapper[4409]: I1203 14:41:51.150646 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 14:41:51.166145 master-0 kubenswrapper[4409]: I1203 14:41:51.152417 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" Dec 03 
14:41:51.189087 master-0 kubenswrapper[4409]: I1203 14:41:51.183723 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" event={"ID":"1176c675-89df-4778-94ed-14f77f9cc668","Type":"ContainerStarted","Data":"7b2b71cefe9c046f02eeffd3e865cf448e2015cef5024c0e7869682c914cc26b"} Dec 03 14:41:51.189087 master-0 kubenswrapper[4409]: I1203 14:41:51.183785 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:51.199560 master-0 kubenswrapper[4409]: I1203 14:41:51.199327 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" Dec 03 14:41:51.201781 master-0 kubenswrapper[4409]: I1203 14:41:51.201263 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-647f96877-kf2cl" podStartSLOduration=5.353853785 podStartE2EDuration="40.201239677s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.742925668 +0000 UTC m=+906.069988174" lastFinishedPulling="2025-12-03 14:41:48.59031156 +0000 UTC m=+940.917374066" observedRunningTime="2025-12-03 14:41:51.140155024 +0000 UTC m=+943.467217540" watchObservedRunningTime="2025-12-03 14:41:51.201239677 +0000 UTC m=+943.528302183" Dec 03 14:41:51.204899 master-0 kubenswrapper[4409]: I1203 14:41:51.204845 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78cd4f7769-lmlf9" podStartSLOduration=4.4167417 podStartE2EDuration="40.204834029s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.861027174 +0000 UTC m=+905.188089680" lastFinishedPulling="2025-12-03 14:41:48.649119503 +0000 UTC m=+940.976182009" 
observedRunningTime="2025-12-03 14:41:51.168424571 +0000 UTC m=+943.495487077" watchObservedRunningTime="2025-12-03 14:41:51.204834029 +0000 UTC m=+943.531896535"
Dec 03 14:41:51.263169 master-0 kubenswrapper[4409]: I1203 14:41:51.262850 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6b64f6f645-zgkn7" podStartSLOduration=5.301845552 podStartE2EDuration="40.262821125s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.13026174 +0000 UTC m=+906.457324256" lastFinishedPulling="2025-12-03 14:41:49.091237323 +0000 UTC m=+941.418299829" observedRunningTime="2025-12-03 14:41:51.252917135 +0000 UTC m=+943.579979661" watchObservedRunningTime="2025-12-03 14:41:51.262821125 +0000 UTC m=+943.589883631"
Dec 03 14:41:51.320108 master-0 kubenswrapper[4409]: I1203 14:41:51.317990 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-d2kpd" podStartSLOduration=5.447688291 podStartE2EDuration="40.31794899s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.718168728 +0000 UTC m=+906.045231234" lastFinishedPulling="2025-12-03 14:41:48.588429427 +0000 UTC m=+940.915491933" observedRunningTime="2025-12-03 14:41:51.310469779 +0000 UTC m=+943.637532315" watchObservedRunningTime="2025-12-03 14:41:51.31794899 +0000 UTC m=+943.645011496"
Dec 03 14:41:51.379473 master-0 kubenswrapper[4409]: I1203 14:41:51.379348 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b9b669fdb-tvkgp" podStartSLOduration=5.985657749 podStartE2EDuration="40.379310092s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.087440319 +0000 UTC m=+906.414502825" lastFinishedPulling="2025-12-03 14:41:48.481092662 +0000 UTC m=+940.808155168" observedRunningTime="2025-12-03 14:41:51.358313069 +0000 UTC m=+943.685375585" watchObservedRunningTime="2025-12-03 14:41:51.379310092 +0000 UTC m=+943.706372598"
Dec 03 14:41:51.411783 master-0 kubenswrapper[4409]: I1203 14:41:51.410643 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-56f9fbf74b-p2kpr" podStartSLOduration=5.070002938 podStartE2EDuration="40.410593554s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.43207971 +0000 UTC m=+905.759142216" lastFinishedPulling="2025-12-03 14:41:48.772670326 +0000 UTC m=+941.099732832" observedRunningTime="2025-12-03 14:41:51.408263309 +0000 UTC m=+943.735325825" watchObservedRunningTime="2025-12-03 14:41:51.410593554 +0000 UTC m=+943.737656060"
Dec 03 14:41:52.197744 master-0 kubenswrapper[4409]: I1203 14:41:52.197687 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" event={"ID":"48b8028a-5751-4806-bb6f-9ba67ff58d60","Type":"ContainerStarted","Data":"533be284232386dcd1da79c2185cf270e6483e94e8319cc8397e59e6a1739f98"}
Dec 03 14:41:52.198490 master-0 kubenswrapper[4409]: I1203 14:41:52.198033 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx"
Dec 03 14:41:52.201585 master-0 kubenswrapper[4409]: I1203 14:41:52.201555 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx"
Dec 03 14:41:52.201677 master-0 kubenswrapper[4409]: I1203 14:41:52.201661 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" event={"ID":"a3d007f9-8a26-45da-8913-b6de85ecacbd","Type":"ContainerStarted","Data":"e926a531087ad1dc5a64b4694fed50b78ca98be3af5cdce89bfb37d7d35b4eba"}
Dec 03 14:41:52.202553 master-0 kubenswrapper[4409]: I1203 14:41:52.202487 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp"
Dec 03 14:41:52.204522 master-0 kubenswrapper[4409]: I1203 14:41:52.204458 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp"
Dec 03 14:41:52.204942 master-0 kubenswrapper[4409]: I1203 14:41:52.204915 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" event={"ID":"e9e336a0-16cc-49a9-9ce9-ca34aaa46b3b","Type":"ContainerStarted","Data":"7ca9963ab3dac9712f5df25c37ca97d1fc1d94353d6159c12e203d43cffc0fb3"}
Dec 03 14:41:52.205458 master-0 kubenswrapper[4409]: I1203 14:41:52.205442 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f"
Dec 03 14:41:52.207222 master-0 kubenswrapper[4409]: I1203 14:41:52.207186 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" event={"ID":"dae6346f-01f4-4206-a011-e2cb63760061","Type":"ContainerStarted","Data":"98430eac68d594659c9e6b864e559e445fc33481b0011b23fcf7f5bbc4aa7f15"}
Dec 03 14:41:52.207989 master-0 kubenswrapper[4409]: I1203 14:41:52.207944 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx"
Dec 03 14:41:52.209770 master-0 kubenswrapper[4409]: I1203 14:41:52.209747 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerStarted","Data":"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"}
Dec 03 14:41:52.211301 master-0 kubenswrapper[4409]: I1203 14:41:52.210942 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx"
Dec 03 14:41:52.212228 master-0 kubenswrapper[4409]: I1203 14:41:52.212206 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerStarted","Data":"241e65a3479cf1b72b5a8381496dce0d53bddde3d16102150df4ae9f7fe40874"}
Dec 03 14:41:52.217227 master-0 kubenswrapper[4409]: I1203 14:41:52.217177 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" event={"ID":"5a439dbb-6458-4800-aade-ea490b402662","Type":"ContainerStarted","Data":"597a11c604d9dce37c2b2b28d2f30042f4cc197de404fb5092debee60848fb88"}
Dec 03 14:41:52.218316 master-0 kubenswrapper[4409]: I1203 14:41:52.218246 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9"
Dec 03 14:41:52.220116 master-0 kubenswrapper[4409]: I1203 14:41:52.220086 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" event={"ID":"e7b297c8-bfae-4d29-9698-9850cefb3c6a","Type":"ContainerStarted","Data":"1b2dcab23977e9f12494f7a855eb3379ffcab89aff17a929b66b38e3cf4ff6a8"}
Dec 03 14:41:52.221117 master-0 kubenswrapper[4409]: I1203 14:41:52.220837 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq"
Dec 03 14:41:52.221117 master-0 kubenswrapper[4409]: I1203 14:41:52.221107 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9"
Dec 03 14:41:52.226803 master-0 kubenswrapper[4409]: I1203 14:41:52.223936 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq"
Dec 03 14:41:52.227040 master-0 kubenswrapper[4409]: I1203 14:41:52.226940 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" event={"ID":"e1c992ff-f2a7-4938-977e-fd4ab03bde82","Type":"ContainerStarted","Data":"0539188e05214a9cf49613a1285b273a0766a3afedf0bdc3983c260bdc18fa40"}
Dec 03 14:41:52.227040 master-0 kubenswrapper[4409]: I1203 14:41:52.226976 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2"
Dec 03 14:41:52.235216 master-0 kubenswrapper[4409]: I1203 14:41:52.235077 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2"
Dec 03 14:41:52.235484 master-0 kubenswrapper[4409]: I1203 14:41:52.235461 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f"
Dec 03 14:41:52.592439 master-0 kubenswrapper[4409]: I1203 14:41:52.592338 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-7c9bfd6967-s2sbx" podStartSLOduration=4.919341702 podStartE2EDuration="41.592308426s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.98008466 +0000 UTC m=+905.307147166" lastFinishedPulling="2025-12-03 14:41:49.653051374 +0000 UTC m=+941.980113890" observedRunningTime="2025-12-03 14:41:52.546404971 +0000 UTC m=+944.873467477" watchObservedRunningTime="2025-12-03 14:41:52.592308426 +0000 UTC m=+944.919370932"
Dec 03 14:41:52.624590 master-0 kubenswrapper[4409]: I1203 14:41:52.624471 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5cd89994b5-2gn4f" podStartSLOduration=4.8105812740000005 podStartE2EDuration="41.624431603s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.397840299 +0000 UTC m=+904.724902805" lastFinishedPulling="2025-12-03 14:41:49.211690628 +0000 UTC m=+941.538753134" observedRunningTime="2025-12-03 14:41:52.592591464 +0000 UTC m=+944.919654000" watchObservedRunningTime="2025-12-03 14:41:52.624431603 +0000 UTC m=+944.951494109"
Dec 03 14:41:52.683690 master-0 kubenswrapper[4409]: I1203 14:41:52.673082 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-845b79dc4f-cj7z9" podStartSLOduration=6.081835302 podStartE2EDuration="41.673031934s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.716212973 +0000 UTC m=+906.043275479" lastFinishedPulling="2025-12-03 14:41:49.307409595 +0000 UTC m=+941.634472111" observedRunningTime="2025-12-03 14:41:52.620712128 +0000 UTC m=+944.947774634" watchObservedRunningTime="2025-12-03 14:41:52.673031934 +0000 UTC m=+945.000094450"
Dec 03 14:41:52.697029 master-0 kubenswrapper[4409]: I1203 14:41:52.685782 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-f6cc97788-v9zzp" podStartSLOduration=4.657937253 podStartE2EDuration="41.685759153s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:12.904907805 +0000 UTC m=+905.231970311" lastFinishedPulling="2025-12-03 14:41:49.932729715 +0000 UTC m=+942.259792211" observedRunningTime="2025-12-03 14:41:52.651156757 +0000 UTC m=+944.978219273" watchObservedRunningTime="2025-12-03 14:41:52.685759153 +0000 UTC m=+945.012821669"
Dec 03 14:41:52.708174 master-0 kubenswrapper[4409]: I1203 14:41:52.708072 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-865fc86d5b-pk8dx" podStartSLOduration=5.668676977 podStartE2EDuration="41.708045062s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.696741603 +0000 UTC m=+906.023804109" lastFinishedPulling="2025-12-03 14:41:49.736109688 +0000 UTC m=+942.063172194" observedRunningTime="2025-12-03 14:41:52.681806272 +0000 UTC m=+945.008868788" watchObservedRunningTime="2025-12-03 14:41:52.708045062 +0000 UTC m=+945.035107568"
Dec 03 14:41:52.744179 master-0 kubenswrapper[4409]: I1203 14:41:52.738456 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qc57x" podStartSLOduration=5.901561945 podStartE2EDuration="12.73843389s" podCreationTimestamp="2025-12-03 14:41:40 +0000 UTC" firstStartedPulling="2025-12-03 14:41:43.360914569 +0000 UTC m=+935.687977075" lastFinishedPulling="2025-12-03 14:41:50.197786514 +0000 UTC m=+942.524849020" observedRunningTime="2025-12-03 14:41:52.726815912 +0000 UTC m=+945.053878418" watchObservedRunningTime="2025-12-03 14:41:52.73843389 +0000 UTC m=+945.065496396"
Dec 03 14:41:52.791053 master-0 kubenswrapper[4409]: I1203 14:41:52.782544 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-crqhq" podStartSLOduration=5.187655499 podStartE2EDuration="41.782524664s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:13.41475931 +0000 UTC m=+905.741821816" lastFinishedPulling="2025-12-03 14:41:50.009628475 +0000 UTC m=+942.336690981" observedRunningTime="2025-12-03 14:41:52.779861578 +0000 UTC m=+945.106924094" watchObservedRunningTime="2025-12-03 14:41:52.782524664 +0000 UTC m=+945.109587170"
Dec 03 14:41:52.824104 master-0 kubenswrapper[4409]: I1203 14:41:52.822671 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-696b999796-zd6d2" podStartSLOduration=6.807022667 podStartE2EDuration="41.822624495s" podCreationTimestamp="2025-12-03 14:41:11 +0000 UTC" firstStartedPulling="2025-12-03 14:41:14.193622731 +0000 UTC m=+906.520685237" lastFinishedPulling="2025-12-03 14:41:49.209224559 +0000 UTC m=+941.536287065" observedRunningTime="2025-12-03 14:41:52.805274665 +0000 UTC m=+945.132337191" watchObservedRunningTime="2025-12-03 14:41:52.822624495 +0000 UTC m=+945.149687001"
Dec 03 14:41:53.236151 master-0 kubenswrapper[4409]: I1203 14:41:53.236021 4409 generic.go:334] "Generic (PLEG): container finished" podID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerID="69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516" exitCode=0
Dec 03 14:41:53.241423 master-0 kubenswrapper[4409]: I1203 14:41:53.236145 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerDied","Data":"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"}
Dec 03 14:41:53.247117 master-0 kubenswrapper[4409]: I1203 14:41:53.247051 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947gqqq5"
Dec 03 14:41:53.247556 master-0 kubenswrapper[4409]: I1203 14:41:53.247503 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-f4ttw"
Dec 03 14:41:54.253104 master-0 kubenswrapper[4409]: I1203 14:41:54.252994 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerStarted","Data":"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"}
Dec 03 14:41:54.406214 master-0 kubenswrapper[4409]: I1203 14:41:54.406117 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tpxmq" podStartSLOduration=4.715076856 podStartE2EDuration="7.406089224s" podCreationTimestamp="2025-12-03 14:41:47 +0000 UTC" firstStartedPulling="2025-12-03 14:41:50.978336888 +0000 UTC m=+943.305399394" lastFinishedPulling="2025-12-03 14:41:53.669349256 +0000 UTC m=+945.996411762" observedRunningTime="2025-12-03 14:41:54.403729627 +0000 UTC m=+946.730792143" watchObservedRunningTime="2025-12-03 14:41:54.406089224 +0000 UTC m=+946.733151730"
Dec 03 14:41:57.842206 master-0 kubenswrapper[4409]: I1203 14:41:57.842132 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:41:57.842206 master-0 kubenswrapper[4409]: I1203 14:41:57.842205 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:41:57.890416 master-0 kubenswrapper[4409]: I1203 14:41:57.890316 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:41:58.346154 master-0 kubenswrapper[4409]: I1203 14:41:58.346083 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:41:58.376450 master-0 kubenswrapper[4409]: I1203 14:41:58.376289 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:58.376450 master-0 kubenswrapper[4409]: I1203 14:41:58.376378 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:58.422276 master-0 kubenswrapper[4409]: I1203 14:41:58.422220 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:41:58.559675 master-0 kubenswrapper[4409]: I1203 14:41:58.559588 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"]
Dec 03 14:41:59.356446 master-0 kubenswrapper[4409]: I1203 14:41:59.354887 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:42:00.313805 master-0 kubenswrapper[4409]: I1203 14:42:00.313634 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tpxmq" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="registry-server" containerID="cri-o://5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412" gracePeriod=2
Dec 03 14:42:00.757186 master-0 kubenswrapper[4409]: I1203 14:42:00.757135 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:42:00.797510 master-0 kubenswrapper[4409]: I1203 14:42:00.797469 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:42:00.915626 master-0 kubenswrapper[4409]: I1203 14:42:00.915556 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities\") pod \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") "
Dec 03 14:42:00.915924 master-0 kubenswrapper[4409]: I1203 14:42:00.915748 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content\") pod \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") "
Dec 03 14:42:00.915924 master-0 kubenswrapper[4409]: I1203 14:42:00.915838 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lch\" (UniqueName: \"kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch\") pod \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\" (UID: \"98bb1857-1cb6-456d-9a3b-cb87a706f2ee\") "
Dec 03 14:42:00.916601 master-0 kubenswrapper[4409]: I1203 14:42:00.916556 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities" (OuterVolumeSpecName: "utilities") pod "98bb1857-1cb6-456d-9a3b-cb87a706f2ee" (UID: "98bb1857-1cb6-456d-9a3b-cb87a706f2ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:42:00.918494 master-0 kubenswrapper[4409]: I1203 14:42:00.918436 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch" (OuterVolumeSpecName: "kube-api-access-x4lch") pod "98bb1857-1cb6-456d-9a3b-cb87a706f2ee" (UID: "98bb1857-1cb6-456d-9a3b-cb87a706f2ee"). InnerVolumeSpecName "kube-api-access-x4lch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:42:00.936668 master-0 kubenswrapper[4409]: I1203 14:42:00.936573 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98bb1857-1cb6-456d-9a3b-cb87a706f2ee" (UID: "98bb1857-1cb6-456d-9a3b-cb87a706f2ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:42:01.018342 master-0 kubenswrapper[4409]: I1203 14:42:01.018213 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:01.018342 master-0 kubenswrapper[4409]: I1203 14:42:01.018259 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:01.018342 master-0 kubenswrapper[4409]: I1203 14:42:01.018273 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lch\" (UniqueName: \"kubernetes.io/projected/98bb1857-1cb6-456d-9a3b-cb87a706f2ee-kube-api-access-x4lch\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:01.190503 master-0 kubenswrapper[4409]: I1203 14:42:01.190443 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:42:01.190816 master-0 kubenswrapper[4409]: I1203 14:42:01.190580 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:42:01.241346 master-0 kubenswrapper[4409]: I1203 14:42:01.241276 4409 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:42:01.322704 master-0 kubenswrapper[4409]: I1203 14:42:01.322656 4409 generic.go:334] "Generic (PLEG): container finished" podID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerID="5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412" exitCode=0
Dec 03 14:42:01.323248 master-0 kubenswrapper[4409]: I1203 14:42:01.323218 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tpxmq"
Dec 03 14:42:01.323644 master-0 kubenswrapper[4409]: I1203 14:42:01.323608 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j57qg" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="registry-server" containerID="cri-o://a909e9b9ea344a39898ef3c764f703c1517d75f12132444891c63166f6b2e355" gracePeriod=2
Dec 03 14:42:01.323850 master-0 kubenswrapper[4409]: I1203 14:42:01.323789 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerDied","Data":"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"}
Dec 03 14:42:01.323941 master-0 kubenswrapper[4409]: I1203 14:42:01.323926 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tpxmq" event={"ID":"98bb1857-1cb6-456d-9a3b-cb87a706f2ee","Type":"ContainerDied","Data":"47f8dbe11f500bd2d752669fa6134e4a5c55b17b5b96e561d825f9e65d3f46e0"}
Dec 03 14:42:01.324522 master-0 kubenswrapper[4409]: I1203 14:42:01.323954 4409 scope.go:117] "RemoveContainer" containerID="5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"
Dec 03 14:42:01.341987 master-0 kubenswrapper[4409]: I1203 14:42:01.341947 4409 scope.go:117] "RemoveContainer" containerID="69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"
Dec 03 14:42:01.361437 master-0 kubenswrapper[4409]: I1203 14:42:01.361335 4409 scope.go:117] "RemoveContainer" containerID="630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524"
Dec 03 14:42:01.374535 master-0 kubenswrapper[4409]: I1203 14:42:01.374481 4409 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:42:01.423459 master-0 kubenswrapper[4409]: I1203 14:42:01.423401 4409 scope.go:117] "RemoveContainer" containerID="5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"
Dec 03 14:42:01.424110 master-0 kubenswrapper[4409]: E1203 14:42:01.424072 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412\": container with ID starting with 5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412 not found: ID does not exist" containerID="5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"
Dec 03 14:42:01.424220 master-0 kubenswrapper[4409]: I1203 14:42:01.424115 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412"} err="failed to get container status \"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412\": rpc error: code = NotFound desc = could not find container \"5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412\": container with ID starting with 5090eadd4fd142e3420bc5e7f4b5e035367af517f8cc3191970dc3ef02cdd412 not found: ID does not exist"
Dec 03 14:42:01.424220 master-0 kubenswrapper[4409]: I1203 14:42:01.424145 4409 scope.go:117] "RemoveContainer" containerID="69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"
Dec 03 14:42:01.424625 master-0 kubenswrapper[4409]: E1203 14:42:01.424468 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516\": container with ID starting with 69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516 not found: ID does not exist" containerID="69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"
Dec 03 14:42:01.424625 master-0 kubenswrapper[4409]: I1203 14:42:01.424492 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516"} err="failed to get container status \"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516\": rpc error: code = NotFound desc = could not find container \"69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516\": container with ID starting with 69c768ac1f71bc0e2e20d5bc9f9c9ee203375e2fd73718953e918c18fcb1d516 not found: ID does not exist"
Dec 03 14:42:01.424625 master-0 kubenswrapper[4409]: I1203 14:42:01.424510 4409 scope.go:117] "RemoveContainer" containerID="630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524"
Dec 03 14:42:01.425073 master-0 kubenswrapper[4409]: E1203 14:42:01.425043 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524\": container with ID starting with 630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524 not found: ID does not exist" containerID="630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524"
Dec 03 14:42:01.425147 master-0 kubenswrapper[4409]: I1203 14:42:01.425068 4409 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524"} err="failed to get container status \"630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524\": rpc error: code = NotFound desc = could not find container \"630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524\": container with ID starting with 630d480958f5b927fe2bb3a4f004003753e357ec30254ba8a8cbaa1f45084524 not found: ID does not exist"
Dec 03 14:42:02.334026 master-0 kubenswrapper[4409]: I1203 14:42:02.333955 4409 generic.go:334] "Generic (PLEG): container finished" podID="de45da87-bb9d-4659-a765-72146b349c2d" containerID="a909e9b9ea344a39898ef3c764f703c1517d75f12132444891c63166f6b2e355" exitCode=0
Dec 03 14:42:02.334926 master-0 kubenswrapper[4409]: I1203 14:42:02.334040 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerDied","Data":"a909e9b9ea344a39898ef3c764f703c1517d75f12132444891c63166f6b2e355"}
Dec 03 14:42:02.695139 master-0 kubenswrapper[4409]: I1203 14:42:02.695084 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:42:02.851387 master-0 kubenswrapper[4409]: I1203 14:42:02.851315 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content\") pod \"de45da87-bb9d-4659-a765-72146b349c2d\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") "
Dec 03 14:42:02.851641 master-0 kubenswrapper[4409]: I1203 14:42:02.851404 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities\") pod \"de45da87-bb9d-4659-a765-72146b349c2d\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") "
Dec 03 14:42:02.851641 master-0 kubenswrapper[4409]: I1203 14:42:02.851523 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmvbr\" (UniqueName: \"kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr\") pod \"de45da87-bb9d-4659-a765-72146b349c2d\" (UID: \"de45da87-bb9d-4659-a765-72146b349c2d\") "
Dec 03 14:42:02.852441 master-0 kubenswrapper[4409]: I1203 14:42:02.852391 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities" (OuterVolumeSpecName: "utilities") pod "de45da87-bb9d-4659-a765-72146b349c2d" (UID: "de45da87-bb9d-4659-a765-72146b349c2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:42:02.854539 master-0 kubenswrapper[4409]: I1203 14:42:02.854461 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr" (OuterVolumeSpecName: "kube-api-access-cmvbr") pod "de45da87-bb9d-4659-a765-72146b349c2d" (UID: "de45da87-bb9d-4659-a765-72146b349c2d"). InnerVolumeSpecName "kube-api-access-cmvbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:42:02.904961 master-0 kubenswrapper[4409]: I1203 14:42:02.904790 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "de45da87-bb9d-4659-a765-72146b349c2d" (UID: "de45da87-bb9d-4659-a765-72146b349c2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:42:02.953820 master-0 kubenswrapper[4409]: I1203 14:42:02.953722 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-catalog-content\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:02.953820 master-0 kubenswrapper[4409]: I1203 14:42:02.953794 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de45da87-bb9d-4659-a765-72146b349c2d-utilities\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:02.953820 master-0 kubenswrapper[4409]: I1203 14:42:02.953808 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmvbr\" (UniqueName: \"kubernetes.io/projected/de45da87-bb9d-4659-a765-72146b349c2d-kube-api-access-cmvbr\") on node \"master-0\" DevicePath \"\""
Dec 03 14:42:03.156964 master-0 kubenswrapper[4409]: I1203 14:42:03.156817 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"]
Dec 03 14:42:03.173988 master-0 kubenswrapper[4409]: I1203 14:42:03.173919 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tpxmq"]
Dec 03 14:42:03.348877 master-0 kubenswrapper[4409]: I1203 14:42:03.348821 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j57qg" event={"ID":"de45da87-bb9d-4659-a765-72146b349c2d","Type":"ContainerDied","Data":"1b8f5b55413201d0abcd7303573259201ea9215fd66a4587245566204b0b5191"}
Dec 03 14:42:03.349526 master-0 kubenswrapper[4409]: I1203 14:42:03.348893 4409 scope.go:117] "RemoveContainer" containerID="a909e9b9ea344a39898ef3c764f703c1517d75f12132444891c63166f6b2e355"
Dec 03 14:42:03.349526 master-0 kubenswrapper[4409]: I1203 14:42:03.349334 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j57qg"
Dec 03 14:42:03.373368 master-0 kubenswrapper[4409]: I1203 14:42:03.373314 4409 scope.go:117] "RemoveContainer" containerID="aa68f81b105e42c2e8a1ba976cb81908e42d1816c4baba6bcc38d01966a01f99"
Dec 03 14:42:03.415832 master-0 kubenswrapper[4409]: I1203 14:42:03.415779 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:42:03.417262 master-0 kubenswrapper[4409]: I1203 14:42:03.417226 4409 scope.go:117] "RemoveContainer" containerID="32bf0d16e8ca082c1b621328e546f10504d37e19794bc10c96d965f28fd6e1a4"
Dec 03 14:42:03.430523 master-0 kubenswrapper[4409]: I1203 14:42:03.430431 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j57qg"]
Dec 03 14:42:03.828599 master-0 kubenswrapper[4409]: I1203 14:42:03.828527 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" path="/var/lib/kubelet/pods/98bb1857-1cb6-456d-9a3b-cb87a706f2ee/volumes"
Dec 03 14:42:03.829401 master-0 kubenswrapper[4409]: I1203 14:42:03.829368 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de45da87-bb9d-4659-a765-72146b349c2d" path="/var/lib/kubelet/pods/de45da87-bb9d-4659-a765-72146b349c2d/volumes"
Dec 03 14:42:04.953274 master-0 kubenswrapper[4409]: I1203 14:42:04.953192 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qc57x"]
Dec 03 14:42:04.953968 master-0 kubenswrapper[4409]: I1203 14:42:04.953485 4409 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qc57x" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="registry-server" containerID="cri-o://241e65a3479cf1b72b5a8381496dce0d53bddde3d16102150df4ae9f7fe40874" gracePeriod=2
Dec 03 14:42:05.379195 master-0 kubenswrapper[4409]: I1203 14:42:05.374611 4409 generic.go:334] "Generic (PLEG): container finished" podID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerID="241e65a3479cf1b72b5a8381496dce0d53bddde3d16102150df4ae9f7fe40874" exitCode=0
Dec 03 14:42:05.379195 master-0 kubenswrapper[4409]: I1203 14:42:05.374697 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerDied","Data":"241e65a3479cf1b72b5a8381496dce0d53bddde3d16102150df4ae9f7fe40874"}
Dec 03 14:42:05.439242 master-0 kubenswrapper[4409]: I1203 14:42:05.439142 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qc57x"
Dec 03 14:42:05.508209 master-0 kubenswrapper[4409]: I1203 14:42:05.508128 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities\") pod \"cc99331f-8536-4958-ac8c-1c0de7de7596\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") "
Dec 03 14:42:05.508507 master-0 kubenswrapper[4409]: I1203 14:42:05.508340 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content\") pod \"cc99331f-8536-4958-ac8c-1c0de7de7596\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") "
Dec 03 14:42:05.508507 master-0 kubenswrapper[4409]: I1203 14:42:05.508408 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqdt\" (UniqueName: \"kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt\") pod \"cc99331f-8536-4958-ac8c-1c0de7de7596\" (UID: \"cc99331f-8536-4958-ac8c-1c0de7de7596\") "
Dec 03 14:42:05.509217 master-0 kubenswrapper[4409]: I1203 14:42:05.509160 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities" (OuterVolumeSpecName: "utilities") pod "cc99331f-8536-4958-ac8c-1c0de7de7596" (UID: "cc99331f-8536-4958-ac8c-1c0de7de7596"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 03 14:42:05.512422 master-0 kubenswrapper[4409]: I1203 14:42:05.512347 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt" (OuterVolumeSpecName: "kube-api-access-vzqdt") pod "cc99331f-8536-4958-ac8c-1c0de7de7596" (UID: "cc99331f-8536-4958-ac8c-1c0de7de7596"). InnerVolumeSpecName "kube-api-access-vzqdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 03 14:42:05.567513 master-0 kubenswrapper[4409]: I1203 14:42:05.567441 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc99331f-8536-4958-ac8c-1c0de7de7596" (UID: "cc99331f-8536-4958-ac8c-1c0de7de7596"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 03 14:42:05.611253 master-0 kubenswrapper[4409]: I1203 14:42:05.610863 4409 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-catalog-content\") on node \"master-0\" DevicePath \"\"" Dec 03 14:42:05.611253 master-0 kubenswrapper[4409]: I1203 14:42:05.610959 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqdt\" (UniqueName: \"kubernetes.io/projected/cc99331f-8536-4958-ac8c-1c0de7de7596-kube-api-access-vzqdt\") on node \"master-0\" DevicePath \"\"" Dec 03 14:42:05.611253 master-0 kubenswrapper[4409]: I1203 14:42:05.610975 4409 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc99331f-8536-4958-ac8c-1c0de7de7596-utilities\") on node \"master-0\" DevicePath \"\"" Dec 03 14:42:06.388079 master-0 kubenswrapper[4409]: I1203 14:42:06.387998 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qc57x" event={"ID":"cc99331f-8536-4958-ac8c-1c0de7de7596","Type":"ContainerDied","Data":"2da9f4894535602085caa72ae4983db1e2d5fd1ead5299115a7a495c7602e9be"} Dec 03 14:42:06.388079 master-0 kubenswrapper[4409]: I1203 14:42:06.388085 4409 scope.go:117] "RemoveContainer" containerID="241e65a3479cf1b72b5a8381496dce0d53bddde3d16102150df4ae9f7fe40874" Dec 03 14:42:06.388899 master-0 kubenswrapper[4409]: I1203 14:42:06.388155 4409 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qc57x" Dec 03 14:42:06.418373 master-0 kubenswrapper[4409]: I1203 14:42:06.418337 4409 scope.go:117] "RemoveContainer" containerID="11e80892a6689e8b3eb694e96b72edfe937d7389098ec22b346845719eab68ea" Dec 03 14:42:06.439810 master-0 kubenswrapper[4409]: I1203 14:42:06.439747 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qc57x"] Dec 03 14:42:06.448189 master-0 kubenswrapper[4409]: I1203 14:42:06.448147 4409 scope.go:117] "RemoveContainer" containerID="90bec9e5b1ab207a8c43d1628b2d22404a6c23ee24a2867a3987f61dcbc937fd" Dec 03 14:42:06.449360 master-0 kubenswrapper[4409]: I1203 14:42:06.449310 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qc57x"] Dec 03 14:42:07.826283 master-0 kubenswrapper[4409]: I1203 14:42:07.826218 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" path="/var/lib/kubelet/pods/cc99331f-8536-4958-ac8c-1c0de7de7596/volumes" Dec 03 14:45:00.214715 master-0 kubenswrapper[4409]: I1203 14:45:00.214611 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x"] Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215527 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215559 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215578 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="registry-server" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 
14:45:00.215590 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="registry-server" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215617 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215631 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215655 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="registry-server" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215668 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="registry-server" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215694 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215709 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215730 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215742 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215768 4409 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215780 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="extract-utilities" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215808 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215821 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="extract-content" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: E1203 14:45:00.215850 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="registry-server" Dec 03 14:45:00.215966 master-0 kubenswrapper[4409]: I1203 14:45:00.215862 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="registry-server" Dec 03 14:45:00.217370 master-0 kubenswrapper[4409]: I1203 14:45:00.216282 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="de45da87-bb9d-4659-a765-72146b349c2d" containerName="registry-server" Dec 03 14:45:00.217370 master-0 kubenswrapper[4409]: I1203 14:45:00.216324 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="98bb1857-1cb6-456d-9a3b-cb87a706f2ee" containerName="registry-server" Dec 03 14:45:00.217370 master-0 kubenswrapper[4409]: I1203 14:45:00.216350 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc99331f-8536-4958-ac8c-1c0de7de7596" containerName="registry-server" Dec 03 14:45:00.219069 master-0 kubenswrapper[4409]: I1203 14:45:00.217487 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.225302 master-0 kubenswrapper[4409]: I1203 14:45:00.225230 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x"] Dec 03 14:45:00.225638 master-0 kubenswrapper[4409]: I1203 14:45:00.225594 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 03 14:45:00.225908 master-0 kubenswrapper[4409]: I1203 14:45:00.225781 4409 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-qldhm" Dec 03 14:45:00.346044 master-0 kubenswrapper[4409]: I1203 14:45:00.345941 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.346460 master-0 kubenswrapper[4409]: I1203 14:45:00.346118 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxx6p\" (UniqueName: \"kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.346460 master-0 kubenswrapper[4409]: I1203 14:45:00.346337 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: 
\"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.448236 master-0 kubenswrapper[4409]: I1203 14:45:00.448120 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.448675 master-0 kubenswrapper[4409]: I1203 14:45:00.448284 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.448675 master-0 kubenswrapper[4409]: I1203 14:45:00.448409 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxx6p\" (UniqueName: \"kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.449613 master-0 kubenswrapper[4409]: I1203 14:45:00.449563 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.454390 master-0 kubenswrapper[4409]: I1203 14:45:00.454341 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.465451 master-0 kubenswrapper[4409]: I1203 14:45:00.465277 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxx6p\" (UniqueName: \"kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p\") pod \"collect-profiles-29412885-9lr8x\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.554736 master-0 kubenswrapper[4409]: I1203 14:45:00.554663 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:00.980553 master-0 kubenswrapper[4409]: I1203 14:45:00.980484 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x"] Dec 03 14:45:01.850883 master-0 kubenswrapper[4409]: I1203 14:45:01.850829 4409 generic.go:334] "Generic (PLEG): container finished" podID="17e3682f-f58f-4794-9402-0f6cacd0bf9c" containerID="fb98400cdc688a3bfdfc5808402455bfb7e517238f49c580fd765cb8f72dbbe9" exitCode=0 Dec 03 14:45:01.851650 master-0 kubenswrapper[4409]: I1203 14:45:01.850893 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" event={"ID":"17e3682f-f58f-4794-9402-0f6cacd0bf9c","Type":"ContainerDied","Data":"fb98400cdc688a3bfdfc5808402455bfb7e517238f49c580fd765cb8f72dbbe9"} Dec 03 14:45:01.851650 master-0 kubenswrapper[4409]: I1203 14:45:01.850997 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" 
event={"ID":"17e3682f-f58f-4794-9402-0f6cacd0bf9c","Type":"ContainerStarted","Data":"cceee7eb1eaeded854989873e11de7829af9d62d66135dc5d3d3f9d736273087"} Dec 03 14:45:03.225059 master-0 kubenswrapper[4409]: I1203 14:45:03.224950 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:03.407721 master-0 kubenswrapper[4409]: I1203 14:45:03.407539 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume\") pod \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " Dec 03 14:45:03.408567 master-0 kubenswrapper[4409]: I1203 14:45:03.408541 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxx6p\" (UniqueName: \"kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p\") pod \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " Dec 03 14:45:03.409334 master-0 kubenswrapper[4409]: I1203 14:45:03.408642 4409 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume\") pod \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\" (UID: \"17e3682f-f58f-4794-9402-0f6cacd0bf9c\") " Dec 03 14:45:03.414957 master-0 kubenswrapper[4409]: I1203 14:45:03.409307 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "17e3682f-f58f-4794-9402-0f6cacd0bf9c" (UID: "17e3682f-f58f-4794-9402-0f6cacd0bf9c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 03 14:45:03.429684 master-0 kubenswrapper[4409]: I1203 14:45:03.429569 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p" (OuterVolumeSpecName: "kube-api-access-gxx6p") pod "17e3682f-f58f-4794-9402-0f6cacd0bf9c" (UID: "17e3682f-f58f-4794-9402-0f6cacd0bf9c"). InnerVolumeSpecName "kube-api-access-gxx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 03 14:45:03.430664 master-0 kubenswrapper[4409]: I1203 14:45:03.430590 4409 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17e3682f-f58f-4794-9402-0f6cacd0bf9c" (UID: "17e3682f-f58f-4794-9402-0f6cacd0bf9c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 03 14:45:03.512365 master-0 kubenswrapper[4409]: I1203 14:45:03.512221 4409 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e3682f-f58f-4794-9402-0f6cacd0bf9c-config-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:45:03.512365 master-0 kubenswrapper[4409]: I1203 14:45:03.512297 4409 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17e3682f-f58f-4794-9402-0f6cacd0bf9c-secret-volume\") on node \"master-0\" DevicePath \"\"" Dec 03 14:45:03.512365 master-0 kubenswrapper[4409]: I1203 14:45:03.512310 4409 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxx6p\" (UniqueName: \"kubernetes.io/projected/17e3682f-f58f-4794-9402-0f6cacd0bf9c-kube-api-access-gxx6p\") on node \"master-0\" DevicePath \"\"" Dec 03 14:45:03.872184 master-0 kubenswrapper[4409]: I1203 14:45:03.872098 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" event={"ID":"17e3682f-f58f-4794-9402-0f6cacd0bf9c","Type":"ContainerDied","Data":"cceee7eb1eaeded854989873e11de7829af9d62d66135dc5d3d3f9d736273087"} Dec 03 14:45:03.872184 master-0 kubenswrapper[4409]: I1203 14:45:03.872166 4409 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cceee7eb1eaeded854989873e11de7829af9d62d66135dc5d3d3f9d736273087" Dec 03 14:45:03.872513 master-0 kubenswrapper[4409]: I1203 14:45:03.872196 4409 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29412885-9lr8x" Dec 03 14:45:04.335250 master-0 kubenswrapper[4409]: I1203 14:45:04.335119 4409 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"] Dec 03 14:45:04.347050 master-0 kubenswrapper[4409]: I1203 14:45:04.346949 4409 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29412840-nfbpl"] Dec 03 14:45:05.827781 master-0 kubenswrapper[4409]: I1203 14:45:05.827690 4409 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c314fa4-1222-42cf-b87a-f2cd19e67dde" path="/var/lib/kubelet/pods/3c314fa4-1222-42cf-b87a-f2cd19e67dde/volumes" Dec 03 14:45:08.598072 master-0 kubenswrapper[4409]: E1203 14:45:08.597999 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c\": container with ID starting with 232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c not found: ID does not exist" containerID="232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c" Dec 03 14:45:08.598072 master-0 kubenswrapper[4409]: I1203 14:45:08.598077 4409 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" 
containerID="232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c" err="rpc error: code = NotFound desc = could not find container \"232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c\": container with ID starting with 232e8964cc414f9b57b0994b83b3eb26c6d23f52a2fe8e9c25693156c2f78f1c not found: ID does not exist" Dec 03 14:46:29.550406 master-0 kubenswrapper[4409]: I1203 14:46:29.550080 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7lc2b/must-gather-b4484"] Dec 03 14:46:29.553993 master-0 kubenswrapper[4409]: E1203 14:46:29.550645 4409 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e3682f-f58f-4794-9402-0f6cacd0bf9c" containerName="collect-profiles" Dec 03 14:46:29.553993 master-0 kubenswrapper[4409]: I1203 14:46:29.550668 4409 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e3682f-f58f-4794-9402-0f6cacd0bf9c" containerName="collect-profiles" Dec 03 14:46:29.553993 master-0 kubenswrapper[4409]: I1203 14:46:29.550919 4409 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e3682f-f58f-4794-9402-0f6cacd0bf9c" containerName="collect-profiles" Dec 03 14:46:29.553993 master-0 kubenswrapper[4409]: I1203 14:46:29.552152 4409 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.560540 master-0 kubenswrapper[4409]: I1203 14:46:29.559509 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7lc2b"/"kube-root-ca.crt" Dec 03 14:46:29.560540 master-0 kubenswrapper[4409]: I1203 14:46:29.559719 4409 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-7lc2b"/"openshift-service-ca.crt" Dec 03 14:46:29.610032 master-0 kubenswrapper[4409]: I1203 14:46:29.608951 4409 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-7lc2b/must-gather-trsd6"] Dec 03 14:46:29.613701 master-0 kubenswrapper[4409]: I1203 14:46:29.611882 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.625416 master-0 kubenswrapper[4409]: I1203 14:46:29.625340 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7lc2b/must-gather-b4484"] Dec 03 14:46:29.626438 master-0 kubenswrapper[4409]: I1203 14:46:29.626394 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8abfb93-cb68-42ee-9b11-6550464e8512-must-gather-output\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.626498 master-0 kubenswrapper[4409]: I1203 14:46:29.626454 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47e169ee-be71-4b40-851c-3888d898052e-must-gather-output\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.626498 master-0 kubenswrapper[4409]: I1203 14:46:29.626495 4409 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwxt\" (UniqueName: \"kubernetes.io/projected/47e169ee-be71-4b40-851c-3888d898052e-kube-api-access-gpwxt\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.626594 master-0 kubenswrapper[4409]: I1203 14:46:29.626561 4409 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7g5\" (UniqueName: \"kubernetes.io/projected/f8abfb93-cb68-42ee-9b11-6550464e8512-kube-api-access-rd7g5\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.642900 master-0 kubenswrapper[4409]: I1203 14:46:29.642822 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7lc2b/must-gather-trsd6"] Dec 03 14:46:29.728900 master-0 kubenswrapper[4409]: I1203 14:46:29.728842 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8abfb93-cb68-42ee-9b11-6550464e8512-must-gather-output\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.729263 master-0 kubenswrapper[4409]: I1203 14:46:29.729246 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47e169ee-be71-4b40-851c-3888d898052e-must-gather-output\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.729433 master-0 kubenswrapper[4409]: I1203 14:46:29.729412 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwxt\" (UniqueName: 
\"kubernetes.io/projected/47e169ee-be71-4b40-851c-3888d898052e-kube-api-access-gpwxt\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.729604 master-0 kubenswrapper[4409]: I1203 14:46:29.729584 4409 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7g5\" (UniqueName: \"kubernetes.io/projected/f8abfb93-cb68-42ee-9b11-6550464e8512-kube-api-access-rd7g5\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.731339 master-0 kubenswrapper[4409]: I1203 14:46:29.729866 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/47e169ee-be71-4b40-851c-3888d898052e-must-gather-output\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6" Dec 03 14:46:29.731339 master-0 kubenswrapper[4409]: I1203 14:46:29.731081 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f8abfb93-cb68-42ee-9b11-6550464e8512-must-gather-output\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.748969 master-0 kubenswrapper[4409]: I1203 14:46:29.748905 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7g5\" (UniqueName: \"kubernetes.io/projected/f8abfb93-cb68-42ee-9b11-6550464e8512-kube-api-access-rd7g5\") pod \"must-gather-b4484\" (UID: \"f8abfb93-cb68-42ee-9b11-6550464e8512\") " pod="openshift-must-gather-7lc2b/must-gather-b4484" Dec 03 14:46:29.761037 master-0 kubenswrapper[4409]: I1203 14:46:29.760904 4409 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwxt\" 
(UniqueName: \"kubernetes.io/projected/47e169ee-be71-4b40-851c-3888d898052e-kube-api-access-gpwxt\") pod \"must-gather-trsd6\" (UID: \"47e169ee-be71-4b40-851c-3888d898052e\") " pod="openshift-must-gather-7lc2b/must-gather-trsd6"
Dec 03 14:46:29.944499 master-0 kubenswrapper[4409]: I1203 14:46:29.944423 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7lc2b/must-gather-b4484"
Dec 03 14:46:29.954692 master-0 kubenswrapper[4409]: I1203 14:46:29.954655 4409 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-7lc2b/must-gather-trsd6"
Dec 03 14:46:30.553940 master-0 kubenswrapper[4409]: I1203 14:46:30.553871 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7lc2b/must-gather-trsd6"]
Dec 03 14:46:30.565977 master-0 kubenswrapper[4409]: I1203 14:46:30.565924 4409 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 03 14:46:30.566165 master-0 kubenswrapper[4409]: I1203 14:46:30.566061 4409 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-7lc2b/must-gather-b4484"]
Dec 03 14:46:30.568094 master-0 kubenswrapper[4409]: W1203 14:46:30.567514 4409 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8abfb93_cb68_42ee_9b11_6550464e8512.slice/crio-297f03c111ff585940eb027d765a583125fd2846537efbe615aeeb60ba779d08 WatchSource:0}: Error finding container 297f03c111ff585940eb027d765a583125fd2846537efbe615aeeb60ba779d08: Status 404 returned error can't find the container with id 297f03c111ff585940eb027d765a583125fd2846537efbe615aeeb60ba779d08
Dec 03 14:46:30.833391 master-0 kubenswrapper[4409]: I1203 14:46:30.833297 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7lc2b/must-gather-trsd6" event={"ID":"47e169ee-be71-4b40-851c-3888d898052e","Type":"ContainerStarted","Data":"3b641f22d99c31787495205fc4a88156c88e7b0a64ca77c26f387dcd33ebf615"}
Dec 03 14:46:30.834696 master-0 kubenswrapper[4409]: I1203 14:46:30.834616 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7lc2b/must-gather-b4484" event={"ID":"f8abfb93-cb68-42ee-9b11-6550464e8512","Type":"ContainerStarted","Data":"297f03c111ff585940eb027d765a583125fd2846537efbe615aeeb60ba779d08"}
Dec 03 14:46:32.853342 master-0 kubenswrapper[4409]: I1203 14:46:32.853282 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7lc2b/must-gather-trsd6" event={"ID":"47e169ee-be71-4b40-851c-3888d898052e","Type":"ContainerStarted","Data":"665a2baa232318c37579c60e75bfb57fa36a2e96c318efe146bb21138f354814"}
Dec 03 14:46:32.853342 master-0 kubenswrapper[4409]: I1203 14:46:32.853342 4409 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-7lc2b/must-gather-trsd6" event={"ID":"47e169ee-be71-4b40-851c-3888d898052e","Type":"ContainerStarted","Data":"a69653c8e6a663e81d0c66c0af35bd4b10070be48819128f20c78f2ef2c5d6d9"}
Dec 03 14:46:32.941892 master-0 kubenswrapper[4409]: I1203 14:46:32.941742 4409 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-7lc2b/must-gather-trsd6" podStartSLOduration=2.480350501 podStartE2EDuration="3.941703801s" podCreationTimestamp="2025-12-03 14:46:29 +0000 UTC" firstStartedPulling="2025-12-03 14:46:30.565808489 +0000 UTC m=+1222.892870995" lastFinishedPulling="2025-12-03 14:46:32.027161789 +0000 UTC m=+1224.354224295" observedRunningTime="2025-12-03 14:46:32.936281118 +0000 UTC m=+1225.263343654" watchObservedRunningTime="2025-12-03 14:46:32.941703801 +0000 UTC m=+1225.268766307"
Dec 03 14:46:33.555700 master-0 kubenswrapper[4409]: I1203 14:46:33.555627 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7c49fbfc6f-7krqx_ec89938d-35a5-46ba-8c63-12489db18cbd/cluster-version-operator/5.log"
Dec 03 14:46:37.041652 master-0 kubenswrapper[4409]: I1203 14:46:37.041561 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcdctl/2.log"
Dec 03 14:46:37.403481 master-0 kubenswrapper[4409]: I1203 14:46:37.403415 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd/1.log"
Dec 03 14:46:37.545543 master-0 kubenswrapper[4409]: I1203 14:46:37.545487 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-metrics/1.log"
Dec 03 14:46:37.740103 master-0 kubenswrapper[4409]: I1203 14:46:37.737440 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-readyz/1.log"
Dec 03 14:46:37.766457 master-0 kubenswrapper[4409]: I1203 14:46:37.763286 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-rev/1.log"
Dec 03 14:46:37.797244 master-0 kubenswrapper[4409]: I1203 14:46:37.795407 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/setup/1.log"
Dec 03 14:46:37.877026 master-0 kubenswrapper[4409]: I1203 14:46:37.871360 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-ensure-env-vars/1.log"
Dec 03 14:46:37.995996 master-0 kubenswrapper[4409]: I1203 14:46:37.995892 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_4dd8b778e190b1975a0a8fad534da6dd/etcd-resources-copy/1.log"
Dec 03 14:46:38.041310 master-0 kubenswrapper[4409]: E1203 14:46:38.041238 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e\": container with ID starting with 8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e not found: ID does not exist" containerID="8d134ccd313903414f3c87188621922dd3739a31023f139786ec39623a1f122e"
Dec 03 14:46:38.063955 master-0 kubenswrapper[4409]: I1203 14:46:38.063605 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-nrqx6_7af643d9-084b-4e46-ae72-ae875ec0560d/nmstate-console-plugin/0.log"
Dec 03 14:46:38.083411 master-0 kubenswrapper[4409]: E1203 14:46:38.083345 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60\": container with ID starting with b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60 not found: ID does not exist" containerID="b7775fb41fdb5fba5482582075e28a7ea5fd0a8cb1197413e24647825cdb3c60"
Dec 03 14:46:38.103643 master-0 kubenswrapper[4409]: I1203 14:46:38.103283 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-6x7jt_95578784-6632-496e-933a-446221fa7d21/nmstate-handler/0.log"
Dec 03 14:46:38.122227 master-0 kubenswrapper[4409]: E1203 14:46:38.122157 4409 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3\": container with ID starting with 8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3 not found: ID does not exist" containerID="8ae2b9b0e522977680f160dad0bb5c106f95425737d516a6cf52a119b3a021c3"
Dec 03 14:46:38.129143 master-0 kubenswrapper[4409]: I1203 14:46:38.129086 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdqp5_fc7a4702-dbd8-4803-b7b4-5b7e51e42bde/nmstate-metrics/0.log"
Dec 03 14:46:38.184029 master-0 kubenswrapper[4409]: I1203 14:46:38.180808 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jdqp5_fc7a4702-dbd8-4803-b7b4-5b7e51e42bde/kube-rbac-proxy/0.log"
Dec 03 14:46:38.204019 master-0 kubenswrapper[4409]: I1203 14:46:38.200689 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-ddls5_563d4899-8cf3-4ccc-965b-4b63573da5f7/nmstate-operator/0.log"
Dec 03 14:46:38.245315 master-0 kubenswrapper[4409]: I1203 14:46:38.245229 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pf6cm_930fd3d1-d7ce-49f1-a3ea-1dd7493f0955/controller/0.log"
Dec 03 14:46:38.268072 master-0 kubenswrapper[4409]: I1203 14:46:38.266761 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-pf6cm_930fd3d1-d7ce-49f1-a3ea-1dd7493f0955/kube-rbac-proxy/0.log"
Dec 03 14:46:38.274042 master-0 kubenswrapper[4409]: I1203 14:46:38.271914 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-7xtkl_dae33b31-3a7f-4ea1-8076-8dee68fcd78e/nmstate-webhook/0.log"
Dec 03 14:46:38.325746 master-0 kubenswrapper[4409]: I1203 14:46:38.325705 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g7b5b_22e8cc91-45ad-4646-8682-fdf4be50815c/controller/0.log"
Dec 03 14:46:38.527670 master-0 kubenswrapper[4409]: I1203 14:46:38.527575 4409 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-g7b5b_22e8cc91-45ad-4646-8682-fdf4be50815c/frr/0.log"